The Pixel Rose: Drawing a Line on Sapience in AI

Sapha Burnell
Published in CodeX
6 min read · Nov 26, 2021

--

“The divergence between comprehension and information is the tantamount issue in building sapient artificial intelligence. Without the ability to comprehend and translate the data observed, a machine has nothing but complicated algorithms, which lead it to decision by programmer’s rote. Without comprehension my machine is an actor giving someone else’s performance, she is not real. A rose’s beauty will not penetrate beyond the chemical components of its scent, the rate of growth from bud to flower. What a rose is, what makes one grow, yes. But the function of a rose? The metaphorical is human in a way no other creature in the universe understands. While I meant to allow Lieben to learn through experience as a human child learns, we did not receive such luxuries.” Dr. Karnak sat back, his forearms against the corner of the obsidian table. “Time, mine Liebchen. It is and always will be about time.”
Neon Lieben, Sapha Burnell

Once, the most relevant issue in Artificial Intelligence was whether or not true AI, or general artificial intelligence, would be possible outside the realms of fiction. Echoes of warehouse-sized analytical engines narrowed over time and miniaturization into the slim voice of a smartphone assistant, whose ghost lives in our pocket machines. The miniaturization of silicon chips and corresponding technological advances ensure the commentary on artificial intelligence becomes one of semantics and philosophy: on the nature of being instead of potentiality.

An Excerpt from Neon Lieben by Sapha Burnell

When does an AI become a sapient being, instead of a rote tool of its creators? Eventually the puppet strings will be sliced, the cloistered room opened to reveal whether the AI is, as John Searle posited in his “Chinese Room Thought Experiment”, merely placing corresponding shapes in boxes by a third-party set of rules as a simulation of understanding, or whether Lieben cognitively writes and speaks Mandarin or Cantonese.

As I sit with a pyramidal gas fire heater at my back, I feel the itch of sensors against my skin. A machine hooked to me, sensing every beat of my heart for the next 23 hours. I’ve been out here long enough to sip a third of my porter; it’s no longer 24. Beside me, a sheet of paper folded and creased, for recording my itinerary. Without the input from my quite human machine, the Holter monitor’s valleys and peaks are meaningless data. The monitor is instantiating: producing an instance, example, or application of the principles of heart monitoring. Its recorded rhythms are stored for a doctor’s interpretation, regardless of what a fictional doctor might think the machine discovered.

We are not yet at the place where a monitor can interpret the human organism. That requires the donation of causality and meaning to the valleys and peaks. While the monitor is useful and fulfils its function, any sentience is assumed, not fact. I wonder if a machine will be able to bring causal and consequent interpretation, given the mutability of human experience? When we cannot understand the intricate ways our neighbours function, let alone the entirety of the human organism, how can a machine? Will it be this inner knowledge, what Dr. Karnak called “the difference between comprehension and information”, which proves an artificial intelligence’s sapience? True intelligence is intentional interpretation, not rote repetition of fact.
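To make the divide concrete, here is a rough sketch in Python (an invented toy class, not any real monitor’s firmware) of what instantiation without interpretation looks like. The recorder produces instances of heart monitoring, and nothing in it models meaning.

```python
from dataclasses import dataclass, field
from time import time


@dataclass
class HolterLog:
    """A toy Holter-style recorder: it instantiates heart monitoring by
    storing beats, but it carries no model of what a rhythm means."""
    beats: list[float] = field(default_factory=list)

    def record(self, timestamp: float) -> None:
        # Each heartbeat becomes a timestamp in a list. Nothing more.
        self.beats.append(timestamp)

    # Note what is absent: there is no interpret() method. Causality and
    # meaning ("that pause was a skipped beat on the stairs") must be
    # donated by the patient's itinerary and the doctor who reads it.


if __name__ == "__main__":
    log = HolterLog()
    log.record(time())  # one beat, one data point
    print(len(log.beats), "beat(s) recorded; meaning supplied: none")
```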

Herein lies the chasm between general artificial intelligence and a well-made simulacrum. “Any mechanism capable of producing intentionality must have causal powers equal to those of the brain… Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain.” John Searle, “Minds, Brains, and Programs” (Behavioral and Brain Sciences, 1980). The dichotomy between intentionality and instantiation is the crux of sapience in both biological machines and technological ones. Insert here the questions of our own Turing Test, the very test Searle was attempting to circumvent.

Smart assistants like Alexa, Google Home, etc. collect data. Conspiracies about whether they record our audio notwithstanding (I wonder how many of my shower songs are in a database; you’re welcome for the Steam Powered Giraffe covers), these devices have the capacity to record information. Search histories, commentary, private conversations, the sheer amount of time I wobble between J.S. Bach and Fleshgod Apocalypse on Spotify while I work. What math I need help with (Alexa, what’s 32 x 48.64?), my spouse’s obsession with wrestling commentaries on YouTube (I’m looking at you, Simon Miller, and the feeling in your tum tum). Like the monitor strapped to my chest, the machine records a vast quantity of information, but as yet has little in the way of interpreting the stimuli beyond potential product placements and demographics. An eventual article about the ethics of constantly listening programs is forthcoming.

Our current jetties into the application of artificial intelligence in a consumer space are reminiscent of Searle’s Chinese Room Thought Experiment, which was itself a challenge to Turing’s Imitation Game, in which Turing theorized digital computers would develop the ability to ‘fool an observer’ into thinking they were human. Searle posited the Turing Test only proved digital computers would be capable of fulfilling tasks without cognitive interpretation, and thus would not truly think. In the Chinese Room Thought Experiment, we have a computer in a blocked-off room, where stimuli in hanzi (Chinese characters) enter, and the program responds with an appropriate character in return. A fluent speaker on the other side of the wall receives these responses and comes to the false conclusion that the ‘person’ inside the blocked room is a speaker and writer of Mandarin (or Chinese, as Searle put it in 1980). While fluency is assumed from the responses, thus also assuming cognition and causality, the truth is one of instantiation. The AI is ignorant of what the characters mean. It follows its algorithms and supplies a result without comprehending the nature of the questions and answers it provides.
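The mechanics of the room fit in a few lines of code. Here is a minimal, hypothetical sketch (the phrases and the rulebook are my own illustration, not Searle’s) reducing the room to a lookup table: the replies can look fluent from the far side of the wall, yet nothing inside represents what they mean.

```python
# A toy Chinese Room: incoming hanzi are matched against a scripted rulebook.
# From outside the wall the replies may pass for fluency; inside, there is
# only pattern matching against rules the program never comprehends.

RULEBOOK = {
    "你好": "你好，很高兴认识你。",        # "Hello" -> "Hello, pleased to meet you."
    "你会说中文吗？": "会，我说得很流利。",  # "Do you speak Chinese?" -> "Yes, fluently."
}

FALLBACK = "对不起，请再说一遍。"           # "Sorry, please say that again."


def respond(prompt: str) -> str:
    """Slide a reply back under the door: pure lookup, no interpretation."""
    return RULEBOOK.get(prompt, FALLBACK)


if __name__ == "__main__":
    for line in ("你好", "你会说中文吗？"):
        print(line, "->", respond(line))
```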

This weak AI (or what I would call a narrow AI) is effective and succeeds at its task, but it is not sapient. However complex, a program without the ability to see the consequences of its actions or the ethical ramifications of their causal chains is deficient. It could lack the empathy necessary to preserve the human organism on a small or grand scale, something all AI ought to be accountable to as a form of technic gospel. Like Lieben prior to the convergence in Neon Lieben, it is absorbing information without the keys to cognition.

The purpose-built AI might collect data and be capable of answering questions in hanzi, without any guarantee the machine understands. True fluency in a language (continuing along the lines of Searle’s experiment) has causal features which prove intentionality. If one were to open up Searle’s room and converse with an AI which replied back in appropriate idioms, metaphor and imagery, both Turing’s Imitation Game and Searle’s Chinese Room would meet their match.

The time is now, while research on the creation of AI is still ‘young’, to implement a framework that allows an AI the agency to develop its sense of sapience and causality, while simultaneously accepting responsibility for its independent actions. Ethics in the machine becomes integral to all but the mythological ‘kill switch’ meant to shut down a belligerent or misinterpreting AI.

I remain more concerned with mistaken conclusions in an artificial intelligence than I am with their development. AIs can be amazing and wonderful helpmates. While for the moment we may not see the necessity of proving sapience, it will be necessary to have such tests and potential proofs waiting for the day the first general AI becomes aware and chooses the intentions to which it dedicates its time. With or without our direct will.

--

Sapha Burnell
CodeX

A cyberpunk author, poet and editor, Sapha bathes in hard sci-fi, ancient female creators and coffee. Futurism: Only ethical androids need apply.