Governments and Social Media companies are in the midst of a heated debate on how to regulate social media platforms. This debate often descends into finger-pointing and mutual suspicion. For example, many Governments believe that social media companies like Facebook, Twitter and YouTube cannot be trusted to act in the public interest because they will always prioritise business interests. In my previous article “Policy Issues Facing Social Media Companies: The Case Study of YouTube”, I argued that social media companies are often not trading off public interests for business interests. They are more often trading off competing public interests, which creates many dilemmas that Governments may not understand.
This article goes a step further and argues that Governments must fundamentally shift their paradigm towards regulating social media companies, recognizing that social media companies, like Governments, are representations of public interests. Here it goes:
Proposing a New Paradigm for Regulating Social Media Companies
By enabling anyone to produce and share content, social media platforms like Facebook and YouTube have decentralized how information and opinions are shared in society. This has brought tremendous public value, such as freedom of speech and enabling access to education. However, it has also enabled individuals to spread hate speech, terrorist agendas and fake content, which can threaten national security and social harmony.
Some argue that the social media space should be completely free and left to the discretion of users. Users will rise up to counter offensive or fake material, or judge for themselves that these should be ignored.
This anti-regulation approach is irresponsible towards public interests. Targeted defamation and incitements to racist violence can easily go viral on social media platforms. Without swift action by authorities, the consequences for personal wellbeing and national security could be irreparable.
Some regulation is necessary to strike the balance between advancing free speech and protecting public interests such as national security and social harmony – the question is how.
“Co-regulation”: A New Paradigm In Regulation
I propose a new paradigm for how Governments regulate social media companies, which I term ‘Co-regulation’.
In the media space, Governments have traditionally seen themselves as guardians of public interest, enacting regulation to prevent content which violates standards of public decency. Governments must recognize that unlike traditional media companies, where content is generated by a small group of individuals, social media platforms represent a broad base of content producers and users. Social media platforms, like Governments, are avenues for public interests to be represented.
Hence, Governments cannot see themselves as enforcers of public interest against social media companies. Instead, Governments and social media companies are joint stewards of public interests on social media platforms. This is the paradigm which undergirds ‘Co-regulation’.
‘Co-regulation’ has three components:
First, content standards should be interpreted and operationalized on social media platforms through an inclusive mechanism. When it comes to interpreting content laws, the scale and speed of the digital world make court decisions impractical. While it would be expedient to assign responsibility to social media companies to interpret and operationalize content laws, this would be unrepresentative of public interests. One idea is for Governments and social media companies to co-develop a swift mechanism which allows a spectrum of public voices to influence the interpretation of content laws in grey cases.
Second, Governments and social media companies should establish a system of public accountability. A good example is the Code of Conduct on Countering Illegal Online Hate Speech, established by the European Commission and four major social media platforms in 2016. It sets public goals for how quickly illegal hate speech should be reviewed and removed. Results are published on a regular basis.
Third, Governments and social media companies should both make commitments, and be held jointly accountable, to public goals. For example, while social media companies invest in systems to detect and review potentially illegal content, Governments should engage the public on what constitutes ‘hate speech’ and ‘fake news’, so that user-flagging is more effective.
Why Not Legislate the Problem Away?
Germany has implemented a law which enables hefty fines for social media companies that fail to take down ‘obviously illegal content’, arguing that without legislation, social media companies will not take their responsibilities seriously.
In my view, the costs of legislation generally outweigh the benefits. The upside – better enforcement – is limited. Business incentives to remove objectionable content are already in play: advertisers are social media platforms’ main source of revenue, and none want their ads to be associated with objectionable content. An advertiser boycott on YouTube earlier this year suggests that market forces are alive and well.
On the other hand, legislation can have dangerous effects. Placing legal responsibility on social media companies to identify the lawfulness of content on their platforms creates an incentive to err on the side of greater caution, i.e. more censorship. Beyond undermining the right to free speech, companies may inadvertently censor important public feedback, for example, on Governmental corruption. Besides, enacting legislation sends a signal that social media companies cannot be trusted to act in the public interest, which is inimical to the principles of co-regulation.
Governments worldwide should recognise social media platforms as legitimate representations of public interests. As co-stewards of public interest, Governments and social media companies hold joint responsibility and accountability for regulating the social media space in a way that best represents public interests. It is about time Governments and Social Media Companies work collaboratively under this new paradigm of co-regulation.
One of the goals of www.techandpublicgood.com is to bridge the worlds of Government, tech and business, which often hold a degree of suspicion towards each other. This article dives deep into controversial policy issues surrounding social media companies.
As a case study, it elucidates the challenges, considerations and dilemmas behind YouTube’s policies. This is me, a Government policy-maker, putting myself in the shoes of a YouTube policy-maker. I figure our considerations are similar despite our different contexts. If you know better than me on any of these issues, feedback is much, much welcomed.
The Unexpected Responsibilities of Social Media Companies
We live in an increasingly divided world. The forces driving these divisions, for example, rising income inequality, geopolitical, racial and religious tensions, were in play long before the advent of social media.
However, social media has provided a channel for divisions to widen. Lowering the barriers for individuals to share and ‘viral’ their knowledge and opinions has brought tremendous benefits, such as spreading education and freedom of speech. On the other hand, it has given greater voice and reach to malicious or ‘fake’ content. Algorithms designed to push us to what we will most likely click create an echo chamber, reinforcing our beliefs and biases.
When a flurry of social media companies took to the scene in the 2000s, their intention was to create platforms for people to find what they wanted – friends, funny videos, relevant information, roommates or hobbyist items. Very few would have imagined that their platforms would completely change how everyday folks conversed and debated, shared and consumed information.
Policy issues facing social media companies
Today, social media companies are adjusting to the new responsibilities that this influence entails. Here is an overview of the issues at stake.
Free speech and censorship
It is important to recognize the role of social media in democratizing how information is generated, shared and consumed. At the same time, not everything is appropriate to be shared online. Social media platforms recognize that they must have a moral view on harmful content that should be taken down, for example, content which aims to instigate violence or harm to others.
However, censorship cannot be overused. Social media platforms cannot become arbiters of morality because many issues are subjective, and it is not the platform’s role to make a judgment on who is right: The same LGBT content can be affirming for some, but offensive for others. When is it fake news, or merely a different interpretation? Here’s a real dilemma: let’s say someone reports an outbreak of disease on Facebook. The Government requests to take down the report until their investigations are completed because it will incite unnecessary fear in their population. Is Facebook best placed to assess who is right?
In general, a social media platform’s policy must identify and take down content that is inherently harmful, while catering to subjectivity by providing choice – to users, on the content they receive, and to advertisers, on the content their brands are associated with. It is an intricate balance to strike, requiring nuanced, consistent policy backed up by a strong and coherent detection, enforcement and appeals regime.
Another policy area surrounds copyright. Individuals sharing content online may inadvertently or intentionally infringe on others’ copyrights. On one level, better detection of copyright infringements is needed. YouTube invested $60m in a system called ContentID, which allows rights holders to give YouTube their content so that YouTube can identify where it is being used.
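Content ID’s internals are proprietary, so the following is only a toy sketch of the matching idea: fingerprint overlapping chunks of a registered work, then look for those fingerprints in uploads. The plain hashing and the 0.8 threshold are illustrative stand-ins; real systems use robust perceptual audio/video features.

```python
import hashlib

def fingerprint(samples, window=4):
    # Hash overlapping windows of the media's sample stream.
    # Exact hashing is a stand-in here: a real system uses
    # perceptual features that survive re-encoding and cropping.
    hashes = set()
    for i in range(len(samples) - window + 1):
        chunk = ",".join(map(str, samples[i:i + window]))
        hashes.add(hashlib.sha1(chunk.encode()).hexdigest())
    return hashes

def match_score(reference, upload):
    # Fraction of the reference's fingerprint found in the upload.
    if not reference:
        return 0.0
    return len(reference & upload) / len(reference)

# A rights holder registers a clip; an upload reuses all of it.
registered = fingerprint([1, 2, 3, 4, 5, 6, 7, 8])
upload = fingerprint([9, 9, 1, 2, 3, 4, 5, 6, 7, 8, 9])
print(match_score(registered, upload) > 0.8)  # True: the upload contains the clip
```

Once a match is found, the policy question in the next paragraph kicks in: take the upload down, monetize it for the rights holder, or do nothing.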
What to do about copyright infringements is another issue. Should they be taken down immediately, or should the platform provide choice to copyright owners? Paradigms have shifted over the years in recognition that copyright owners may have different preferences: to enforce a take down, seek royalties or take no action.
A third category of policy issues surrounds managing users’ privacy rights.
First, how can the platform generate advertising revenues and keep their user base engaged, while respecting different preferences for personal privacy? This typically pertains to the practice of combining personal information with search and click history to build up a profile of the user, which enables targeted advertising. Information is sometimes sold to third parties.
Second, what does it mean to give people true ‘choice’ when it comes to privacy? Many argue that lengthy privacy agreements, which leave people no option other than quitting the app, do not provide a real choice.
Third, should individuals have the right to be forgotten online? The EU and Google have been in a lengthy court battle on the right of private citizens to make requests for search engines to delist incorrect, irrelevant or out of date information returned by an online search for their full name, not just in their country of residence but globally.
Children bring these policy issues into sharper focus based on notions of age-appropriateness, consent, manipulation and safety. Platforms like Facebook do not allow users below 13. YouTube introduced ‘Restricted Mode’ as well as YouTube Kids, which filter content more strictly than the regular platform.
Similarly, higher standards apply to children’s privacy. Should companies be allowed to build profiles on children, and potentially manipulate them at such a young age? Should people be allowed to remove posts they made or online information about them while they were children?
Safety for children is also a huge issue particularly on interactive platforms where children can be groomed by predators. Taking into account privacy considerations, how can we detect it before harm is inflicted, and what is the right course of action?
The YouTube Case Study
I have not exhausted the full range of policy issues that social media companies deal with, but the broad categories are in place. Now let’s get into the specifics of how social media companies have answered these questions through policy, implementation and resource allocation.
To put some meat on this, here’s a quick case study of YouTube’s approach. There are at least four components:
Product differentiation
Enhancing user choice within existing products
Closing the policy-implementation loop
Strategic communications and advocacy
1. Product differentiation
Product differentiation is one way to cater to different appetites for content and privacy. In 2015, YouTube launched ‘YouTube Kids’, which excludes violence, nudity and vulgar language. It also provides higher privacy by default through features such as blocking children from posting content and viewing targeted ads, and enabling them to view content without having to sign up for an account. ‘YouTube Red’ offers advertisement-free viewing.
However, product differentiation has its limits because significant resources are required for customization. There is also a slippery slope to avoid: if YouTube rolled out “YouTube China” with far stricter content censorship, imagine the influx of country requests that would ensue!
2. Enhancing user choices within existing products
Concerning privacy, users who do not want their personal data and search/click history to be linked can go to the activity controls section of their account page on Google, and untick the box marked “Include Chrome browsing history and activity from websites and apps that use Google services”. For particular searches, you can also use “incognito mode”, which ensures that Chrome will not save your browsing history, cookies and site data, or information entered in forms. These are ways to provide real choices in privacy.
3. Closing the Policy-Implementation Loop
A robust policy defines clear principles which determine when content should be taken down or excluded from monetization opportunities and Restricted Mode. Implementation policy then becomes critical. With the large volume of content coming online every minute, it is impossible for YouTube employees to monitor everything. YouTube has to rely on user flagging and machine learning to identify copyright infringements or offensive content.
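YouTube’s actual thresholds and models are not public. As a minimal sketch of the flagging-plus-machine-learning pipeline just described, here is a hypothetical triage rule that combines a classifier score with user flags and routes grey-area content to human reviewers; all the numbers are invented for illustration.

```python
def triage(model_score, user_flags, auto_remove=0.95, review=0.5):
    """Decide what happens to a piece of content.

    model_score: a hypothetical classifier's probability that the
    content violates policy; user_flags: number of user reports.
    Thresholds are illustrative, not YouTube's actual values.
    """
    # User flags nudge borderline content towards human review.
    adjusted = min(1.0, model_score + 0.05 * user_flags)
    if adjusted >= auto_remove:
        return "remove"          # clear-cut violation
    if adjusted >= review:
        return "human_review"    # grey area: keep a human in the loop
    return "keep"

print(triage(0.97, 0))  # remove
print(triage(0.40, 3))  # human_review (0.40 + 0.15 = 0.55)
print(triage(0.10, 1))  # keep
```

The design point is the middle branch: automation handles the clear cases at scale, while ambiguous content is escalated to people.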
However, algorithms cannot be 100% accurate and often cannot explain why decisions are made. A robust appeals and re-evaluation process with humans in the loop is needed to ensure the integrity of the policy. More importantly, the human touch is needed to positively engage content producers (who hate to be censored).
In my previous jobs, we often quipped: “policy is ops”. There is no point in having a perfect policy if enforcement and implementation simply cannot support it. Policy teams need a constant feedback loop with implementation teams, to bridge the ideal with the possible.
4. Strategic communications and advocacy
Finally, robust policy is necessary, but insufficient for social media companies. Strategic communications and advocacy are an absolute must.
Public criticism of a company’s policies can negatively impact business. Boycotts and greater Government regulation are examples. YouTube is swimming against a common but simplistic narrative that tech companies are simply trading off public interests in privacy and security for business interests such as the growth of advertising revenue.
Misperceptions about policies can also have dangerous impacts. A few years ago, Israel’s Deputy Foreign Minister met with YouTube executives, raising the issue of Palestinians leveraging YouTube videos to incite violence against Israel. She later released a statement which inaccurately suggested that Google would collaborate with Israel to take down this content. Google refuted this, but the nuance could have already been lost with segments of the public. YouTube’s policy of neutrality must come across clearly, even as lobby groups try to drag it into their agendas.
The purpose of Strategic Communications is to create a wide circle of advocates around YouTube’s policy stance so that negative press and misperceptions are less likely to take off. Elements of Strategic Communications include:
Going beyond the ‘what’ of policy, to the ‘why’. It is important to illuminate the consistent principles behind YouTube’s policy stances, as well as the considerations and trade-offs entailed. Channels such as blog posts enable this, since mainstream media is unlikely to provide the level of nuance needed.
Building strategic relationships and advocates. This includes entering into conversations and debates with your most strident critics, and building alliances with third parties who advocate your views.
Strong internal communications. Since social media companies themselves are run by an aggregation of people with different beliefs, it is essential that employees do not feel disenfranchised by the company’s policy stance.
Providing an alternative narrative. In addition, an important point for YouTube to make is that more is at stake than taking down offensive video content. Ultimately, we are all fighting against greater divisiveness and polarization in society. Although some elements of YouTube exacerbate this, YouTube can also make a huge dent in bridging divides. Hence, I love what YouTube is doing with “Creators for Change”, a program that cultivates creators who aim to counter xenophobia, extremism and hate online. These creators are working on web series on controversial issues, as well as educational workshops for students. They are using the YouTube platform to close divides.
It is far too simplistic to say that companies only pursue business interests, leaving Governments to protect public interests. Every new product, including social media platforms, is a double-edged sword, with the potential to bring us closer to or further from where we want to be as a society.
Both Governments and Social Media companies are trying to push us closer to where we want to be. However, Governments will tend to advocate for more conservative policies as their primary objective is to minimize downside on issues such as national security, privacy and Government legitimacy. On the other hand, private businesses are simultaneously managing downsides while pushing the boundaries on issues such as free speech and revenue generation models.
A natural tension between these two positions is healthy as we decide, as countries and global communities, where we collectively fall on issues. This is how democracy works, after all.
Have you worried that your headaches are the result of a brain tumour, or that your child’s leg pain is caused by cancer? If this sounds familiar, you are in good company: you may well be a cyberchondriac – “a person who compulsively searches the Internet for information on real or imagined symptoms of illness.”
If you search “child leg pain”, Google will auto-complete your search with “leukemia” – not because it is the most likely cause of your child’s leg pain, but because people who have searched “child leg pain” in the past were most likely to have clicked on links correlating this phrase with leukemia (probably because they wanted to understand the worst-case scenario). That’s how machine learning works – it pushes up the article that was most popular among other readers.
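The ranking machinery behind real autocomplete is far more elaborate, but the dynamic described above, surfacing whatever past users clicked most, can be sketched with a simple click counter. The click log below is invented for illustration.

```python
from collections import Counter

# Hypothetical click log: (query, completion the user clicked)
click_log = [
    ("child leg pain", "leukemia"),
    ("child leg pain", "leukemia"),
    ("child leg pain", "leukemia"),
    ("child leg pain", "growing pains"),
    ("child leg pain", "sports injury"),
]

def suggest(query, log, top_n=3):
    # Rank completions purely by how often past users clicked them.
    counts = Counter(c for q, c in log if q == query)
    return [completion for completion, _ in counts.most_common(top_n)]

print(suggest("child leg pain", click_log))
# The rare-but-scary cause tops the list because it drew the most
# clicks, not because it is the most likely diagnosis.
```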
It makes sense to push up an article that most previous users clicked on – this is one of the best proxies for relevance to new users. However, the engineers behind search engines realise this isn’t necessarily beneficial for Google users:
It’s scary – the average reader may assume cancer is the most common cause of child leg pain, or brain tumours are a common reason for headaches. Cyberchondriacs get even more paranoid.
It can encourage harmful behaviour – imagine if you search “best way to kill myself” and the top hits documented in detail the most painless way to die. Will the information push you over the edge in your decision?
Engineers behind search engines have to make a choice on what information to present to users – what people want (the traditional way) versus what they may need.
The Making of “Dr Google”
It was my pleasure to have Evgeniy Gabrilovich, Senior Staff Research Scientist working on health-related searches at Google, shed light on how Google thinks about its responsibilities to users. Evgeniy is addressing a sizeable group of Google’s customers: 5% of all Google searches are health-related, 20% of which are people who type in a symptom hoping to find a cause.
Evgeniy’s team works on The Health Knowledge Graph, which aims to give users the best facts when they enter their symptoms. The Health Knowledge Graph does not replace traditional web search; it complements it. Try it out: type in “chest pain”, “depressed” or “child leg pain” and you will get a side bar on the right which covers the ranked list of likely conditions, how common or critical the condition is, incidence by age group, etc. The center section still presents traditional web-search results.
When you type in a symptom you’re experiencing, such as “child leg pain”, Evgeniy’s team aims to give you the most accurate diagnosis (in this case, “growing pains”) while minimising cyberchondria.
Google realised that they didn’t have the expertise to do this on their own. It’s a huge technical challenge because of the large number of conditions and symptoms, and the overlaps between them. Furthermore, people use colloquial language to describe their symptoms, which the machine needs to decipher. Finally, user intent is often unclear. For example, if someone types in “weight loss” – are they trying to lose weight? Are they describing a side effect of medication?
Together with doctors from Harvard Medical School and the Mayo Clinic, they used machine learning to establish correlations between symptoms, conditions and treatments such that when you type in your symptom, you will get information that closely mirrors what a doctor might tell you (although it doesn’t go so far as to diagnose you… yet). Just to make sure, every result is evaluated by real doctors, who are asked: “Would you be comfortable with Google showing these results?”
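The Health Knowledge Graph’s construction is not public; a drastically simplified sketch of the idea, using an invented toy dataset, is to estimate how often each condition co-occurs with a symptom in clinical records and rank conditions by that share rather than by click popularity.

```python
from collections import defaultdict

# Invented toy records of (reported symptom, diagnosed condition);
# the real data came from doctors' annotations.
records = [
    ("leg pain", "growing pains"), ("leg pain", "growing pains"),
    ("leg pain", "sports injury"), ("leg pain", "leukemia"),
    ("chest pain", "muscle strain"), ("chest pain", "heartburn"),
    ("chest pain", "heartburn"),
]

def likely_conditions(symptom, data):
    # Estimate P(condition | symptom) by simple co-occurrence counts.
    totals = defaultdict(int)
    for s, c in data:
        if s == symptom:
            totals[c] += 1
    n = sum(totals.values())
    return sorted(((c, round(k / n, 2)) for c, k in totals.items()),
                  key=lambda x: -x[1])

print(likely_conditions("leg pain", records))
# "Growing pains" tops the list, mirroring what a doctor would say,
# rather than the scariest condition users clicked on.
```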
What does this mean for the medical profession?
Fifteen years ago, very few would have trusted medical advice that wasn’t from a doctor. Ten years ago, people started turning to search engines for advice they weren’t ready to give. Now, search engines are training themselves to give professional medical advice. They will only get better.
What’s next? I recently met a start-up, Mendel Health, which automates matching cancer patients to clinical trials through personal medical history and genetic analysis. Founder Karim Galil was previously a medical doctor. He was motivated by the fact that a single doctor’s brain cannot capture all information about diseases, possible treatments and clinical trials. He had patients die because he, as their doctor, was not aware of a clinical trial that could have saved their life.
Let’s take Karim’s idea a step further – suppose all my genetic, medical information and daily physical conditions (heart rate, glucose levels…) are constantly updated in a database that is linked to all potential interventions, treatments and medications.
While I am healthy, I can be alerted to risk factors and preventative actions (for example – you have a 50% chance of becoming diabetic in the next year. If you do X, Y and Z, the probability drops to 20%).
When I am ill, I can understand all my treatment options and the probability of success.
When a machine can diagnose me and recommend potential treatments, what will be the role of my doctor?
Much of what a primary care doctor does – assessing my condition, referring me to other specialists or recommending basic medications – can be encoded in software and search engines. Will they simply be a ‘stamp of approval’ – a safety blanket of sorts – before I take my next steps to get treatment?
Perhaps new roles for doctors will open up – for example, in training and verifying Dr Google as more and more people rely on it.
Complex surgical procedures will likely still require human attention. However, with robotic technologies like Verb Surgical, which enable top surgical expertise to be propagated across many doctors, will the average level of surgical skill required by each doctor be lower than before?
Why does this matter?
I honestly can’t envision a world with no doctors. Health is so close to our hearts that it requires a personal and emotional touch. However, it is important to understand how technology will change the role of the doctor:
This will have large impacts on how countries train doctors (e.g. how long? what skills?), allocate resources (e.g. primary care vs specialists), and design incentives in their healthcare system (e.g. if patients have access to so much information, will there be a trend towards over-consumption of medical services? Do co-payments have to change?).
I am certainly not an expert in the field of medicine or medical technology, but would like to continue exploring this topic – especially from the perspective of what countries need to know, and how they should respond. Ping me if you are a doctor / work in healthcare and medical technology – I would love to hear your thoughts.
The story of income inequality is not new – as lower and middle-class incomes stagnate while the highest income brackets race ahead, the wealthy have access to goods and services that are increasingly out of the average person’s reach.
But we now see its detrimental effects more clearly than ever. I live in Silicon Valley, and when news of Donald Trump’s election broke, the overwhelming feeling was disbelief. It was unimaginable. Tears of anguish were shed, yet a large part of the country celebrated. To me, that moment captured the deeper impact of inequality – fragmentation of society. Our politics become polarized, we are unable to find middle ground in our interests, and we increasingly feel like a nation of enemies, not countrymen.
While the problem gets more serious, our typical approaches to tackling inequality are reaching their limits. Redistribution is a political hot potato that pits the interests of the “haves” and “have-nots” against each other. Investing heavily in educational opportunities has diminishing marginal returns on social mobility both in the absolute sense (because the future of jobs is increasingly uncertain) and in the relative sense (because wealthier parents give their children more and more advantages).
We are in desperate need of new paradigms to fight inequality in cities. Here are two ways I believe technology can be a powerful, game-changing force – if deployed thoughtfully by cities.
First, cities should use technology to make life experiences in the city more and more independent of incomes.
It would be impossible to close the income gap completely, short of communism. A society where incomes are totally equal is also undesirable, as it erodes the motivation to work.
However, I believe that technology can make life in the city increasingly independent of income, which can go a long way towards mitigating the daily experience of inequality.
Let me start with explaining the notion of an aspirational good – things that people wish they had money to buy. In transport, most people aspire towards owning a car. In housing, it is a condominium or a private home (American friends: as opposed to a publicly-built Housing Development Board apartment, which 80% of Singaporeans live in). In healthcare, it is a private doctor or hospital bed – at your choice and convenience. In education, it is getting into top schools and universities.
There is an unsustainable dynamic behind aspirational goods. Because these goods are limited in supply, the more people can afford them, the more expensive they become, and the further they move out of reach of the average citizen. Aspirational goods are the source of a huge amount of angst in the middle class.
Technology has the potential to overturn the entire notion of an aspirational good. By creating new forms of value, it can make the alternatives so attractive that even those who have money choose not to buy the aspirational good.
Take transportation for example. Owning a car is so attractive today because public transportation is an inferior option on many counts – the low cost cannot make up for its lack of time efficiency (it takes about twice the amount of time as a car ride), comfort (especially in humid weather), and customization (as a car owner, I know I can get a ride whenever I want).
What if public transport can be faster, more comfortable, more customized and cheaper than owning a car? With technology, this need not be a pipe dream. Imagine a day when you can wake up in the morning and your phone already knows where you need to be. It recommends the top three ways to get there. You select one, and within a minute, your ride shows up at your door – perhaps a shared car, or an electric bike if it’s sunny. It gets you to the train station just as your train pulls in. When you get out of the train, your minibus has just arrived to take you to the office. After work, you can summon a sleek designer vehicle for your dinner date. On the weekend, an autonomous jeep shows up at your door-step to take your family around for a day of fun.
You don’t need to buy multiple tickets – everything is paid through your phone. Or, you can even pay for transport just like a Netflix or Amazon Prime Subscription: a flat fee for unlimited rides. You never need to worry about parking again. With alternatives like this, how many people would still want to own a personal car? Even the wealthy may reconsider, especially if we simultaneously put in policies to make driving more inconvenient, such as no-drive zones in the city.
Just as technology brings about new forms of value (e.g. customization, flexibility) for those who don’t own a car, how can it do the same for other sectors?
How can technology help to transform Singapore’s public housing estates such that they offer new forms of value which private estates cannot provide? For example, how can we help HDB dwellers feel like the entire estate – with all its facilities and open spaces – is their home, one much bigger and diverse than any private estate? Digital communities and intra-town transportation may be aspects of this.
How can technology make a face-to-face doctors’ appointment something that people no longer seek as the “premium option”, for example, by making tele-health so attractive and pervasive?
I believe if domain experts and technologists put their minds to this, they will be able to come up with much better ideas than these! In short, technology can help catapult currently “inferior” options to equal status as “aspirational” options by creating new forms of value.
Second, cities should use technology to distribute scarce land and human resources more equitably.
In most countries, there is a healthy debate on how progressive and equitable the tax and redistribution regime is. However, not as much attention is paid to how other scarce city resources – land and manpower – are used. These too, must be used equitably, and technology can help cities achieve this.
Reducing the land used on roads is a great example of how we can use land more equitably. Roads and parking lots tend to be utilized disproportionately by those who own cars, who – in Singapore – tend to be wealthier. Can we cut down on roads and parking, and reallocate this land to purposes such as community facilities and public housing, which benefit a wider proportion of the population?
Yes, and technology is critical to this. How much land we need for roads and parking is determined by the concept of “peak demand” – the maximum number of vehicles on the road, ever. We can cut down peak demand by encouraging people to use shared mobility options rather than drive a private car (I write about how tech enables this here), and by investing in autonomous freight and utility so that these activities can be done at night, when roads are far emptier.
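To make “peak demand” concrete, here is a toy calculation, with all trip times invented, showing how moving freight trips to night hours lowers the peak number of concurrent vehicles, and with it the road capacity a city needs to build.

```python
def peak_concurrent(trips):
    # trips: list of (start_hour, end_hour) pairs; returns the
    # maximum number of vehicles on the road at any hour of the day.
    return max(sum(1 for s, e in trips if s <= h < e)
               for h in range(24))

commuters = [(8, 9)] * 3          # three morning commutes
freight_daytime = [(8, 10)] * 2   # freight during rush hour
freight_night = [(2, 4)] * 2      # the same freight, moved to night

print(peak_concurrent(commuters + freight_daytime))  # 5
print(peak_concurrent(commuters + freight_night))    # 3
```

The same total travel happens in both scenarios, but spreading it across the day cuts the peak, and roads are sized for the peak, not the average.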
Public Sector Manpower
Similarly, we can use public sector manpower more equitably by investing in technology. Technology can significantly reduce the manpower we commit to customer services. For example, Govtech rolled out MyInfo, which enables citizens to automatically fill in their administrative information for Government schemes with the click of a button. Chatbots on Government websites will increasingly be able to answer public queries; phone lines will no longer be needed. Public sector manpower can now be dedicated to functions which are in great need of resources. One such area is social work and education. Families in the bottom rung of society often face a cocktail of challenges – divorce, low-income, lack of stable employment, cycles of incarceration and so on. Giving them (or their children) a real chance of breaking out involves an extremely high level of hand-holding and investment by social workers and schools. Resources are sorely needed here.
Access to top quality healthcare
Let’s take another scarce resource – top surgeons. People who can pay for their services access better quality care, and stand a higher chance at recovery. Technology can change this dynamic. Companies like Verb Surgical are using machine learning to propagate top surgeons’ expertise more widely. This is how it works: every time the best surgeons perform a procedure, every single action is recorded in a common machine “brain”. The “brain” is trained to associate each action with the probability of a successful surgery. As the “brain” records more and more surgeries, it gets smarter and smarter. Now, the “brain” is made accessible to ALL surgeons. At each step of their surgery, they are told what successful surgeons did. Now, the best surgical expertise is within the reach of the average citizen.
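Verb Surgical’s system is proprietary; the “brain” described above can be caricatured as a tally of observed success rates for each action at each surgical step. The step names, actions and records below are entirely made up.

```python
from collections import defaultdict

# Each record: (surgical step, action taken, did the surgery succeed)
recorded_surgeries = [
    ("suture", "technique_a", True), ("suture", "technique_a", True),
    ("suture", "technique_a", False), ("suture", "technique_b", True),
    ("suture", "technique_b", False), ("suture", "technique_b", False),
]

def recommend(step, records):
    # For a given step, pick the action with the highest observed
    # success rate across all recorded surgeries.
    stats = defaultdict(lambda: [0, 0])   # action -> [successes, total]
    for s, action, ok in records:
        if s == step:
            stats[action][1] += 1
            stats[action][0] += ok
    return max(stats, key=lambda a: stats[a][0] / stats[a][1])

print(recommend("suture", recorded_surgeries))  # technique_a (2/3 vs 1/3)
```

As more surgeries are recorded, the estimates sharpen, which is the sense in which the “brain” gets smarter over time.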
Technologies that enable our scarce resources (e.g. land, public sector manpower and top surgeons) to benefit the broad population and serve those in acute need are the kinds of technologies that cities should invest in, and quickly enable through regulation.
Unfortunately, such broad and loose definitions of a ‘Smart City’ give cities little guidance on what to focus on when prioritising investments and regulatory reform – an incredibly important conversation given the limited resources at most cities’ disposal. They also do not paint a compelling vision for why being a Smart City matters, which disengages most of the population. Personally, before I worked in tech, I felt absolutely no connection to the idea of a ‘Smart City’. Tech was cool, but I never thought it was crucial.
I believe that using technology to tackle inequality and its effects should be a Smart City’s ambitious goal and guiding force, providing focus and rallying support from its constituents. This article spelled out two ways to do so.