From its inception, the project of artificial intelligence has largely been concerned with replicating the human capacity for logic and thought. The engineering history of AI reads like a narrative of man versus himself, driven to model the next great feat of thought: the capacity for strategy, even humor and punning. With those engineering benchmarks met, we pit man against machine in highly publicized feats of strength in which AI bests man at his greatest games: chess, Go, and Jeopardy!
In focusing on replicating and besting human intelligence, we often miss the fact that in engineering these systems, man has created alternative intelligences: ones that mimic our own, yet differ from it in important ways.
Summing up the proceedings from AI in Asia, Urs Gasser reminded us that the complexity of AI is bi-directional: it lies both in modeling human behavior and in understanding the output of AIs. One of the most puzzling and concerning themes emerging from last week's discussion centered on the interpretability of AI systems, their mechanisms, logics, and judgements. Engineers often can't understand, much less explain, how cycles upon cycles of pattern recognition and interpretation produce a particular computational output.
The inscrutability of AI and machine learning poses serious challenges for governance and oversight. Not only are these systems proprietary, much like algorithms generally, but they also often behave unexpectedly and inexplicably. Recognizing this challenge, early discussions of a European right to explanation for algorithmic judgements suggest only that an outcome which cannot be explained is non-binding. This fix seems a feeble shrug in the face of such a serious constraint. Clearly, we need more creative, imaginative ways of thinking through this challenge.
—
Together with Tim Maly, my partner in critical provocation from our days at the Berkman Center, I orchestrated an opening icebreaker that got conference participants thinking concretely about their own experiences and expertise while pushing them to their imaginative limits right away. In small groups, we asked participants to share a recent encounter with a smart or intelligent object or interface. In other groups, we shared a character from childhood, mythology, folklore, and the like who could serve as inspiration for the personality of an intelligent assistant or AI. Each time, we wrote our objects and characters on index cards. Then, mashing up the object and personality cards, we pushed the conversation one step further, imagining what it might be like to interact with these AI systems if the personality of the character inhabited, or perhaps infected, the smart object.
Over lunch, I spoke with a young participant with experience in music recommendation systems. Her random card pairing matched the spirit of Burning Man (radical inclusion and communalism) with facial recognition technology. It was hard not to take this combination literally, imagining facial recognition technology deployed at the Burning Man festival itself. That's not so far-fetched, of course, given Silicon Valley's interest in the event. But a free-for-all meant to give people space to be whoever and whatever they want to be, complete with costumes and experimental drugs, seems like exactly the wrong place for intelligent identification systems that could link your life and history back to the real world just by scanning your face. Still, the surprising juxtaposition of cards made for an interesting thought experiment!
Rahul Batra's presentation on machine learning and language processing of Burmese in Myanmar surfaced details that underscored how important it is to examine concrete examples from across the globe. Alongside a rapid increase in mobile penetration, misinformation and hate speech on Facebook have spread anti-Muslim sentiment and fueled a wave of violence against Rohingya Muslims. These precedents serve as object lessons and early signals of more global, systemic problems that machine learning arguably both enables and may help solve.
Bringing in perspectives from the art world, Dani Admiss introduced us to speculative works that explore the boundaries of the technology and pose important cultural questions. Consider being able to purchase a car based on its ethical AI temperament, to reflect your own value system! In Matthieu Cherubini's Ethical Autonomous Vehicles, the answer to the trolley problem is outsourced to the user rather than the manufacturer.
The afternoon breakout sessions focused on specific applied topics, everything from intellectual property to governance to creativity. Eager to follow the thread of how these machines "think" and "act," I joined the creativity group. Discussion ranged from creative constraints and boredom, to the differences between generativity and creativity, to novelty and discovery, and finally to true genius. If machine learning systems and AIs are always products of the constraints their developers build into them, can an AI ever truly be credited with creativity? Is it possible to design AI systems that creatively break the rules? Certainly AIs are good at innovations in efficiency, novelty, and thoroughness, yet we may still rely on humans to judge the real "creativity" of AI outputs. The more balanced and interesting near-term applications look like creative partnerships between man and machine, working in concert.
Throughout the day, it became clear that cultural differences and language barriers continue to limit the generalizability of ethics and values, even as we aim to embed them into systems or to shape oversight and governance. Even a discussion about AI creativity is inflected by an imposed Western idea of the lone, individual creative genius, as opposed to something more collaborative and discovery-oriented. Urs summarized this as a mindset problem, one that needs room for much more imaginative thinking and new vocabulary, and in which the arts and storytelling become incredibly useful tools for thought. We need to keep stretching our imaginations further. We may never be able to comprehend what it's like to think like an AI; there will always be a gap between man's experience and that of the AI, an "othering." But it's interesting to consider how we come to understand other minds, whether the developing minds of children, the minds of domestic animals (smart as a puppy), or those of entirely fantastical and foreign characters.
This feature was written exclusively for Digital Asia Hub. For permission to republish or for interviews with the author, please contact Dev Lewis.