Over the past decade, artificial intelligence has migrated from computer geeks' workshops to something many people encounter in their everyday lives, but not without fears about its effects. Last year, the Pew Research Center assembled a panel of more than 1,300 experts and polled them on the impact of our increasing reliance on algorithms, the mathematical models underpinning artificial intelligence that aid in making decisions and completing tasks, then boiled their responses down into a set of themes.
In the Pew Center's report, released in February 2017, the first themes expressed positive outcomes from the advance of these models, such as rapid growth in the volume of algorithmic uses and more benefits from data-driven solutions, guided in part by artificial intelligence. But the themes quickly turned darker, including the loss of humanity and human judgment as more and more algorithms replace human decision-making, and predictive models replace respect for the individual.
But what if artificial intelligence could be expressed as an extension of human reasoning, common sense, and knowledge? In this embodiment of artificial intelligence, algorithms and models reflect the accumulation and organization of knowledge, based on a shared understanding. Humanity and human judgment remain in charge but are assisted by the amplification of knowledge, brought about by today's infinite connectivity and memory stores.
This vision of artificial intelligence is the basis of New Sapience, a five-year-old enterprise creating a software platform for enhancing human reasoning by enhancing access to knowledge built around shared models of understanding. New Sapience was formed in 2012 by Bryant Cruse, a former data management systems engineer. At Lockheed Aircraft in the 1980s, under contract to the Hubble Space Telescope, Cruse began working with expert systems, an earlier form of artificial intelligence, where he developed an appreciation for both its possibilities and limitations.
Representing Human Knowledge in Software
After 1989, Cruse started two software companies; the second, Altair Aerospace, was bought by a public corporation in 2000. At Altair Aerospace, Cruse developed an approach for representing human knowledge in software, which the company applied to highly technical information.
In 2002, he left Altair to extend his work in artificial intelligence to real-world problems, which led to New Sapience's founding in 2012. Along the way, Cruse teamed up with software architect Karsten Huneycutt to get code development started. In 2013, New Sapience attracted its first outside investment from Bill Bandy, a serial entrepreneur and Ph.D. physicist, now the company's chief technology officer. In 2015, Sean Reineke, a former senior executive at Lockheed Martin with a background in computer systems and aerospace at IBM, joined New Sapience as CEO. (Disclosure: The author's spouse is a cousin of Bill Bandy.)
Cruse's earlier work in space systems surfaced problems with current directions in artificial intelligence, particularly its inability to scale up for large and complicated missions, and those problems helped motivate the company's founding. Most artificial intelligence today, he says, seeks to create a machine that replicates the human brain, particularly through neural networks that try to emulate the brain's systems of nerve cells. The processing of information in neural networks is called machine learning or deep learning, in which underlying patterns in relationships are revealed and those relationships are built into knowledge bases.
This approach to artificial intelligence, however, often wilts when the problems encountered are big or complicated or both. Today's artificial intelligence, says the company, amasses knowledge as individual data points for processing by machine learning in neural networks. All data in neural networks may be created equal, and while machine learning may reveal underlying patterns and relationships, there's usually no way to assess the relevance or value of those patterns or relationships. In short, no knowledge is created, just a collection of facts that may be related. There still may be value in the output, such as identifying mutations in genomes that cause cancer, but it often takes a lot of computing power and massive databases to get there.
This lack of an underlying model often leads to another problem: the inability to process ambiguous input. If you've had to explain, repeat, or refine voice commands to Siri, Alexa, or Cortana that use search engines and pattern matching, you know the problem. Say, for example, you spend too much time in the sun, and your skin turns red. If you ask these language processing systems today about red skin, you may need to explain further if you get a response about Washington, DC's football team or a type of potato.
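To make the problem concrete, here is a minimal, hypothetical sketch of how a system backed by even a tiny world model could use conversational context to pick the intended sense of an ambiguous phrase like "red skin." The sense entries and context words are invented for illustration; this is not New Sapience's actual code.

```python
# Hypothetical sketch: a small sense inventory for an ambiguous phrase,
# where each candidate sense carries a set of associated context words.
SENSES = {
    "red skin": [
        {"sense": "sunburned skin", "context": {"sun", "sunburn", "beach", "lotion"}},
        {"sense": "red-skinned potato", "context": {"potato", "cooking", "recipe"}},
    ],
}

def disambiguate(phrase, context_words):
    """Choose the candidate sense whose context overlaps most with the
    words already present in the conversation."""
    candidates = SENSES.get(phrase, [])
    if not candidates:
        return None
    return max(candidates,
               key=lambda c: len(c["context"] & context_words))["sense"]

print(disambiguate("red skin", {"sun", "beach"}))       # sunburned skin
print(disambiguate("red skin", {"potato", "recipe"}))   # red-skinned potato
```

A pure pattern matcher has no such overlap to compute, which is why it may answer with a football team or a potato when the speaker means a sunburn.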
In New Sapience systems, human knowledge is expressed as a form of applied epistemology, the study of knowledge itself as well as its structure and related belief systems. In this platform, knowledge is accumulated in a rational, deterministic way rather than piled on for later deep learning. Data in these systems are organized in models, constructed around classes, objects, properties, and language patterns, and governed by a set of rules and procedures.
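The structure described above, classes and objects carrying properties under a set of rules, resembles an inheritance hierarchy. As a rough illustration only, with invented names and not New Sapience's actual design, such a model might be sketched like this:

```python
# Hypothetical sketch: concepts arranged in a class hierarchy, where a
# property lookup falls back to the parent concept unless a more
# specific fact overrides it.
class Concept:
    def __init__(self, name, parent=None, properties=None):
        self.name = name
        self.parent = parent
        self.properties = dict(properties or {})

    def get(self, prop):
        """Look up a property, inheriting from ancestors if absent here."""
        if prop in self.properties:
            return self.properties[prop]
        if self.parent is not None:
            return self.parent.get(prop)
        return None

animal = Concept("animal", properties={"alive": True})
bird = Concept("bird", parent=animal, properties={"can_fly": True})
penguin = Concept("penguin", parent=bird, properties={"can_fly": False})

print(penguin.get("alive"))    # True, inherited from "animal"
print(penguin.get("can_fly"))  # False, the more specific fact wins
```

The point of the sketch is that each fact sits in an organized structure with known relationships to other facts, rather than existing as an isolated data point.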
Approaching Artificial General Intelligence
But data in the platform are kept distinct from the models and accessed through a separate query language, an architecture that New Sapience says achieves scalability. The idea of keeping data separate from models is hardly new; it is the basis of the semantic web, pioneered and fostered by the World Wide Web Consortium. The semantic web enables the building of ontologies, vocabularies created with XML, the eXtensible Markup Language, for organizing human knowledge and meaning in networks over the web.
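The separation described, a model kept distinct from the data it describes and queried through its own language, can be sketched in miniature. The triple format below borrows the semantic web's subject-predicate-object convention; all names are illustrative assumptions, not the company's schema.

```python
# Hypothetical sketch: the model (what properties a class of thing has)
# lives apart from the data (instances stored as triples), and a small
# query layer mediates between them.

# Model: expected properties per class of thing.
MODEL = {
    "person": {"name", "born"},
    "company": {"name", "founded"},
}

# Data: instances stored separately as (subject, predicate, object) triples.
DATA = [
    ("NewSapience", "is_a", "company"),
    ("NewSapience", "founded", 2012),
    ("BryantCruse", "is_a", "person"),
]

def query(subject, predicate):
    """A trivial stand-in for a query language: match a pattern over triples."""
    return [o for s, p, o in DATA if s == subject and p == predicate]

print(query("NewSapience", "founded"))  # [2012]
print(query("BryantCruse", "is_a"))     # ['person']
```

Because the triples can grow without touching the model, and the model can be refined without rewriting the data, the two scale independently, which is the scalability argument in a nutshell.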
New Sapience, however, is applying this idea to artificial intelligence, creating software products intended to extend and enhance human knowledge for individuals, groups, organizations, and communities. In fact, its principals believe the New Sapience method is distinct from most other embodiments of artificial intelligence, even approaching the elusive goal of artificial general intelligence that performs intellectual tasks comparable to human thinking.
The company is now creating its core world model with about 2,500 terms, which it says represents an initial critical mass of knowledge. This set of terms, according to New Sapience, approximates the language comprehension of a first-grader, but enough to get started and bootstrap further development of its platform. The company says its systems also have a smaller footprint, with the code running its platform a small fraction of that written for many of today's artificial intelligence systems. New Sapience received a U.S. patent on its technology in March 2016.
The first products from New Sapience, expected late next year, are called "Sapiens," designed to work at first like personal assistants such as Siri or Alexa, but provide more context and reveal inconsistencies, ambiguities, or falsehoods. Sapiens are also envisioned to operate in groups, attach to devices in Internet of Things systems, and be configured in private and public networks for individuals or businesses. The company expects to develop its own product line at first, then license the technology for further uses.
In The Wizard of Oz, the scarecrow character, played by Ray Bolger in the 1939 film, was seeking a brain when all along he expressed an innate wisdom. In much the same way, artificial intelligence developers today are seeking to create a synthetic brain, when in fact they may be better off enhancing the innate intelligence and wisdom of human beings.