Commentary

Esther Dyson Talks About AI, Real-World Issues

In the ’80s, I worked for a publisher that chronicled technology’s progression from mainframes and minicomputers to wide- and local-area networks, PCs/Macs and home computing.

That decade was as exciting and fast-moving as the advent of the internet and the social and mobile innovation waves that followed. At the epicenter was a woman named Esther Dyson, who wielded massive influence through her industry newsletter Release 1.0 and her conference PC Forum, religiously attended by many -- if not all -- of the CEOs and executives who mattered, including Bill Gates and Steve Jobs. Talk about a woman thriving in a man’s world!

Dyson completely bypassed the glass ceiling, instead creating her own brand of thought leadership in a marketplace even more male-dominated than today’s.

Later in the digital era, she invested in the early stages of behemoths like Google, Facebook and LinkedIn (indirectly), Zillow, Evernote and Square, to name a few; has served on the boards of a diverse array of notable companies such as Meetup, WPP Group, Evernote, Yandex and 23andMe; and has been an angel investor in dozens of other firms across tech categories from health to space travel.

So when a friend introduced us, I jumped at the chance to ask for her thoughts on AI.

Dyson’s experience with AI dates back to those days in the early ‘80s when concepts were still emerging.  She wrote an issue of her newsletter covering AI pioneer Doug Lenat’s ontology-based Cyc knowledge base (among other AI companies), and he liked it so much that he offered her a job entering knowledge -- which was how AI systems acquired their knowledge in those days.  She politely declined.   

Dyson’s deep historical perspective is also informed by her recent personal investments in AI companies like Geometric Intelligence (now part of Uber), which uses models to learn more efficiently; Init.ai, a chatbot SDK platform; Turbine.ai, focused on virtual/simulated clinical trials; Powerset search technology and Medstory (both now part of Bing); and Syllable [asksyllable.com].

But AI is not necessarily her focus. She looks for companies solving big problems, and AI may provide a means to that end.  She noted, “I’m a techie intellectual, so I always appreciate elegant engineering. But I’m also a pragmatist, and I look at AI as a utility that makes a thing work -- not as ‘the thing’ itself.”  

We talked about the maturity of AI across the spectrum of its manifestations, and Dyson referenced a development that has continued to progress from the early days. “The AI we knew back then was so-called ‘expert systems’ -- basically, logic -- and that is really mature now. We no longer call it AI.”  

She noted, however, that newer approaches are still evolving, such as pattern recognition and some forms of natural language understanding or sentiment analysis, which are “more neural-netty.”

Dyson said that companies using these types of technologies can get to, say, 95% accuracy in performing tasks within a reasonable amount of time, but “All the money and effort spent is in those last few percentage points to get to so-called 100% accuracy -- and they have to get there in order for it to ‘work,’ because a 1% error rate can cause a 25% cost increase.

“But in the end, humans aren’t ‘100%’ either…  In fact, I think that errors are actually the foundation of creativity,” she said.  

“The most interesting challenges don’t have a ‘100%’ answer -- like, which [job] candidate should you pick as CEO?  That depends on what kind of company you want.”

Many AI technologies are getting better in real time. Dyson uses the example of automated spelling correction, which has now morphed into word and even phrase or sentence prediction, because the algorithms pick up on patterns in the way people write.
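To make the idea concrete, here is a minimal toy sketch of pattern-based prediction -- not anything Dyson described, and far simpler than real predictive-text systems, which use neural language models. It just counts which word most often follows the current one in a sample of writing and suggests that word; the principle (learn patterns from how people actually write, then reuse them) is the same.

```python
# Toy next-word predictor: a bigram frequency model.
# Purely illustrative -- real predictive text uses far richer models.
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it and how often."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    """Suggest the most frequent follower of `word`, if any was seen."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

sample = "thanks for the update thanks for the help thanks so much"
model = train_bigrams(sample)
print(predict_next(model, "thanks"))  # -> "for" (seen twice, vs. "so" once)
```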

At some point our writing platforms moved past spellcheck and started correcting grammar and predicting what else we want to write. Did you notice when this happened?  Dyson noted, “The point of AI is to not be noticeable.”

Think about that and try to identify where AI shows up in your daily life. AI technology is often working in the background; you may take the conveniences for granted and never recognize them as AI.  

The other reason you may not think about this is that, “once AI is successful, we usually call it something else: facial recognition, for instance. We don’t say ‘AI facial recognition.’”

I asked Dyson if she had cautionary advice relating to AI, and she noted the well-worn example of the way algorithms use our behavior (or that of our peers) to help us make choices in a world with too many options for us to sift through ourselves -- whether it’s videos, job opportunities, which shoes to buy or which potential “friends” to engage with.

“We need to be very aware of the influence of the data used by the algorithms in our lives.  People make the decisions; AI just makes us more efficient in reapplying the criteria and biases of people’s decisions in new but similar situations,” she said.  

“The most important thing to consider is that much of what happens is under our control, but ‘our’ is an ambiguous concept….” 

Who exactly is the “we” whose behavior the system models?   The problem is not the algorithms; it’s the data the algorithms use -- and that data reflects the decisions or behavior of thousands or even millions of normal, flawed individuals.

Given Dyson’s long perspective, both backward and forward, I had to ask where we are headed with AI. Will computers develop consciousness? What will life in the future be like when AI is fully deployed?

Her response:  “Consciousness: I think it’s an emergent property; it may emerge from the interaction of multiple systems. Or it will emerge from their integration with human brains… But in practical terms, we don’t need machines to have consciousness. We want to retain the consciousness for ourselves and have the machines support us.  If they end up with both consciousness and purpose, maybe we should just hand things over to them.

“But in the meantime, the most important thing for human beings is to have long-term purpose, not just consciousness. And the way for that to happen is for us to have meaningful jobs. 

“If I were running the world, I would get people to pay attention to the long-term economics of collective investment in health and education, and pay caregivers and coaches and teachers what they are worth in the long run -- not just what the market will bear in the short term. In the long run, we would save money, and we would also be providing purposeful jobs that cannot be done by machines.”

This is not hollow preaching. Dyson is putting her money and time where her mouth is as executive founder of a nonprofit called Way to Wellville, a 10-year project helping five U.S. communities build the capacity to keep their residents resilient and healthy, rather than wait until they are sick enough for expensive and often futile clinical care.  

The goal, she says, “is to show the value of investing in health rather than renting it. We’re helping the communities build their own sustainable capacity, in approaches like pre-/postnatal care, healthy school food, diabetes/obesity prevention and mental health counseling. It’s not a nice white lady from New York with a program; it’s helping each community reach its own goals.”

She hopes the Way to Wellville example will inspire other communities to copy it in their own way, and that employers, governments and other institutions will start investing long-term in our greatest asset: human minds and bodies.

And technology is still a means to an end for Dyson: “AI will continue to make our lives more efficient, easier and better regulated. Whether we get to regulate ourselves, however, or are regulated by big government and big business… that is up to us -- whoever ‘us’ is.”

I’m hoping “us” includes more people like Esther Dyson.

1 comment about "Esther Dyson Talks About AI, Real-World Issues".
  1. Henry Blaufox from Dragon360, July 27, 2017 at 4:25 p.m.

    On getting to 100 percent accuracy, or any measure of perfect -- there is no such thing. In our industry, getting to 80 percent accuracy or so is common enough (Dyson's mention of 95 percent is a bit of a surprise). To get beyond that, diminishing returns kick in: the cost of incremental increases in performance rises steeply, sometimes exponentially, so the return may not justify the expenditure. For ad tech overall, why not consider it this way: if we achieve 80 to 85 percent accuracy, we know we are hitting the target four times or more out of five. For ad targeting, viewability and load guarantees, isn't knowing you are right four times out of five more than justifiable? Before these technologies came along, we didn't know how well we were performing at all, especially in real time.
