The field of AI has progressed significantly in recent years. Breakthroughs in machine learning have enabled computers to mimic humans in areas such as image understanding and speech processing. However, many problems remain unsolved, such as teaching computers to read and genuinely understand text. We are likely decades away from Artificial General Intelligence that surpasses human intelligence, let alone runaway bots that humans can no longer control.
For now, we must be realistic about what AI can achieve for several reasons.
The Risks of Unrealistic Expectations in AI
First, unrealistic expectations create incentives to deploy unsafe or unreliable systems. Take autonomous vehicles, for example. AI systems enable a car to perceive its environment accurately, then plan and execute its path. Ensuring that these systems work safely and reliably together is a mammoth task. There is a reason Waymo has not made commercial deployments despite years of testing.
Unrealistic expectations from investors and customers about the timeline for commercial autonomous vehicles may pressure companies into deploying unsafe or unreliable vehicles prematurely. Uber’s deployment last December was one such example. In that case, regulators were able to clamp down quickly. However, future cases will be less clear cut and will depend on the self-regulation of technologists. Premature testing puts human lives at risk, and when public opinion plunges after an accident, the entire industry is delayed from delivering the enormous safety benefits of autonomous vehicles.
Second, hype about Artificial General Intelligence misdirects our attention. We often forget that, historically, humans and organizations have evolved to master each new technology as it progressed. We should focus our attention on facilitating this evolution, for example by enabling a cycle of re-skilling and job re-design so that people increasingly play roles that machines cannot. New value arises when people focus on what they do best. In Singapore, for example, pharmacists provide better patient counselling now that intelligent robots manage medicine packing.
Finally, AI researchers need a reliable stream of funding to make the long-term investments in basic research that yield step changes in the field. Unrealistic expectations make the field of AI susceptible to funding crashes that hinder progress, as the history of AI winters demonstrates. One side effect is that during these winters, only large players with deep pockets can continue to cement their advantages. If this privilege is not used responsibly, what does that mean for the distribution of benefits arising from AI? On a side note, this is why I think that Governments should commit long-term funding to basic AI research.
How Can We Better Distinguish Reality from Hype?
The hype surrounding AI can be detrimental, but how can we help people better distinguish reality from hype?
Personally, I feel that reality and hype are best distinguished in the context of problem-solving. The AI community needs to work closely with people who own problems, such as improving preventive health, boosting educational equality, and tackling unemployment. What can, and cannot, be solved through AI? What are the risks? How can AI systems be designed to complement humans? Google set up People + AI Research (PAIR) to explore such questions. However, the onus is not on the AI community alone. Problem owners must become advocates and critics of AI in their own contexts, and they must play a role in public education.
One challenge in distinguishing hype from reality in AI is the competitiveness of the community. The market incentivizes emerging companies to oversell what AI can deliver in order to raise investment.
On some issues, the AI community needs to lay down their guns and unite. Educating the public to differentiate hype from reality in AI is one of them, and big companies have to take a disproportionate share of the responsibility because their existence is less dependent on their ability to oversell. The technical community must also work together on problems such as AI safety, which should not be a basis for competition. This is the intent of the Partnership on AI. I personally hope to see the Partnership build strong relationships not just within the commercial community, but with third parties such as Governments and Non-Profits who, in some contexts, are trusted as neutral arbiters on technology issues.
Ultimately, what is at stake is the tremendous value AI can bring to humanity if it progresses quickly, safely, and with the trust and collaboration of users. To this end, fostering realistic expectations about AI is instrumental.