Philosophy in the Real World

This is the transcript of a talk I gave to Secondary 4 students at Raffles Institution on 30 Sep 2016.

 

Hi, my name is Jonathan Sim. I am a philosopher and I work at Nanyang Technological University (NTU), Singapore.

Let’s discuss this question today: how is philosophy relevant in the real world?

You’ve taken classes in philosophy. And you’re probably wondering: what’s the point?

Some of you may say: “Sure, ethics might be useful, as it can help me decide what is right or wrong.” Or some of you may say: “Some aspects of logic might be useful: it helps me develop good reasoning skills.” Some of you may say: “Philosophy is really interesting but it won’t be able to feed me, or help me make money.” And some of you may even say: “I think it’s rubbish, I don’t need this.”

So, what practical use is philosophy in the real world?

What’s the point of asking whether or not I live inside a simulation, or whether human nature is good or bad? What’s the point of asking whether what I know is true, or whether the table in front of me exists?

How are all these relevant to the real world?

Sure, the philosophers in the past several centuries were able to contribute a lot to the world, but that’s because back then, the only subject taught in school was philosophy! But what about now? We have the sciences, and engineering, we have practical disciplines that train you to make a difference in the world. So why philosophy?

So let me share with you my experience working as a philosopher in NTU over the past three years. Allow me to share with you the many interesting ways that I’ve seen philosophy and philosophers in action in the real world.

My main project involves creating online videos on Chinese philosophy. Aside from that, I work very closely with a research centre known as Para Limes (which means “Beyond Boundaries”). It was a special project initiated by the President of the University, Prof. Bertil Andersson. The centre is driven very strongly by the conviction (which Prof. Andersson and many Nobel laureates share) that the next world-changing breakthrough is to be found at the interface of disciplines, of academia, government and industries.

In other words, the next major breakthrough is to be found where various academic disciplines, government and industry meet and interact. This is a serious conviction, and the university makes it a point to bring in top scientists, mathematicians, doctors, policy-makers, civil servants, ambassadors, Nobel laureates and, yes, philosophers. In fact, some of these people are on Time magazine’s list of the 100 Most Influential People in the World.

I have had the honour of sitting at the table with them to discuss many of these important issues. And it has been very insightful.

It’s very interesting how the latest scientific discoveries have opened up so many philosophical questions. Let me give you one example.

Recent medical research has found that our gut bacteria have an incredible influence on our neural circuitry – on our thoughts, desires and, consequently, our actions. What we eat not only changes our gut bacteria for better or for worse, it also changes who we are. Literally, we are what we eat! More interestingly, some research even suggests that you can treat a person with severe autism by transplanting gut bacteria through faeces – that’s right, human poop – from a healthy individual to one with autism. That’s how much the bacteria inside us influence who we are!

This has led many scientists to ask very philosophical questions as a result of their findings. Are we our gut bacteria? (Or how much of our gut bacteria counts as us?) Can we change who we are by changing our diets? If so, shouldn’t what we eat also count as a moral problem? This is where philosophers enter into scientific research, helping researchers make sense of the questions that arise from it.

Beyond the research itself, science can only tell us what is – it can only tell us facts about ourselves and about the world. But facts alone cannot directly translate into action. Science lacks the tools to prescribe what we should do in most situations.

The example of gut bacteria forces us to think really hard about who and what we are. If diet changes the way we behave and act, should we punish people who don’t eat properly, as a way to prevent crime? Is it fair that only wealthy people can afford to nourish their children properly? Should we create a class of superhumans through a diet that best enriches their gut bacteria? Should there be government policies to control what we eat?

This is where policy-makers turn to philosophers to answer the philosophical questions that arise from such scientific research. Do we have this going on here in Singapore? Yes. We have philosophers at the Centre for Biomedical Ethics, where they and other specialists help to answer questions like these. Ok, they’re not working on gut bacteria right now, but they do deal with the philosophical problems that arise in the course of research. The same is true elsewhere in the world.

Ethics aside, there are other important questions. What does it mean to be human? What does it mean to be me? How do I understand myself?

How we understand ourselves affects a great deal of how we live and interact with other human beings. To put it simply, our lifestyles will change drastically depending on how we answer these questions. Who’s interested in the answers? Not just the government, but also businesses that want to sell the next big thing when the next cultural wave takes over. And they are seeking insights from philosophers to help them make the next business decision.

But perhaps one of the more interesting revelations I had was to see many of these brilliant minds agree that the sciences and social sciences have hit their limits – that these disciplines have hit a brick wall – and that the problems they are dealing with require philosophical input to aid the search for solutions. They echo the same refrain: science can only explain and describe, but it cannot prescribe action.

Scarily, in some areas of science, scientists are finding that their models have great predictive power, yet no one understands these computer models or why they work – they just do. Many top academics and policy-makers are very worried about that. How can we use what we don’t understand?

On top of that, the top economists, central bankers, and even government officials I’ve met are saying: the economic theories taught at university are wrong; we’re making too many false assumptions, and too many bad policies!

In some of these discussions, they would turn to me and half-jokingly ask: what does the philosopher have to say? They know that I’m quite new and don’t have much to contribute, but they are serious in their belief that philosophy is needed to rise out of the difficulties they face.

So, what are philosophers doing elsewhere in the world?

I met the former director (now retired) of the Rathenau Institute in the Netherlands, a political research think-tank. He was very proud of the team of philosophers he had employed to tackle a variety of problems in the Netherlands, such as migration and unemployment.

He recounted how his team of philosophers came to the court’s aid in the trial of a man with mental illness who had been charged with murder. The philosophers argued in court about just how much responsibility he bore for the crime. It was their philosophical input that helped the court decide how culpable the man was.

I also had the opportunity to interact with people from the UN. They were interested in learning more about Chinese philosophy, so one of them spoke to me about it. It turns out, to my surprise, that they publish and circulate official papers on philosophy to stimulate new ideas for policy and governance within the organisation. Yes, philosophy still plays a big role in shaping the ideas of policy-makers even today.

And I think we live in very interesting times. Our own civil service is starting to recognise this, and they are embracing philosophy and philosophers in their decisions now.

I met a philosopher from Germany who has been coming in and out of Singapore because the top ranks of our civil service have been consulting him. He is by far the most interesting person I have ever met. He has been using his research on space and time, and his other philosophical work, to advise world leaders. In fact, he was personally involved in the negotiations between the US and the Soviet Union, and facilitated the very process of nuclear disarmament between the two sides. He was also a personal advisor to Nelson Mandela after Mandela was freed from prison.

Here is a philosopher who means business and is actually using his research and philosophy to change the world and Singapore too.

Now, I’ve also met some civil servants here – people with some background in philosophy – exploring different conceptions of time and space, the metaphysics of the relations between economic entities, and more. All this with the purpose of rethinking and crafting better policies.

 

These experiences with so many interesting people over my years at NTU have left me with a deep impression of just how important a role philosophy still has to play in society.

How will all these technologies change the way we think and perceive the world? How will all these advancements change the way we behave towards one another? Will we change the way we think about ourselves? How will our society change? Is this a good change or a bad change?

Business people want input on these philosophical questions, not just because they’re unsure whether a technology is good or bad for society, but also to help them better understand the conceptual changes that will impact them and the work they do.

One example. Insurance has, from the very beginning, dealt with physical objects: houses, cars, cargo, horses and cows. If something means a lot to you and your business, you can insure it. But the insurance industry now has a new problem, a philosophical problem: how do you insure digital content? If I copy a file from a computer to a hard disk, the file is still on the computer. There is no loss of data – maybe just a loss of earnings (and even that is debatable). It’s not like the traditional form of insurance, where there’s an actual loss of something physical. So how do you conceive of non-physical goods in a way that makes them sensible to insure? To this day, the insurance industry has trouble figuring out how best to insure digital content, because it simply hasn’t solved the philosophical problem of the ontological status of digital goods.

Let me give you another example. I met the director of an IT company. He says he often encounters problems making certain decisions. How do you choose when none of the options is the best – and, for that matter, they’re all just as bad as one another?

Eeny, meeny, miny, moe? Or do you just flip a coin?

These issues may not require philosophical content, but they do require a certain amount of philosophical training to help you come to a sound conclusion. And these are the kinds of skills that employers are looking for to help solve the tough problems they face. The director of the IT company? He told me: “I wish I had philosophers on my team. We deal with these kinds of problems almost every day.”

He’s not the only one who wants philosophers. Consulting firms like Cognizant recognise the value of philosophical training for solving difficult problems. They specifically ask for philosophy graduates.

Now, to be clear, I’m not here to tell you to go study philosophy and pursue a philosophical career. I’m just telling you about the role of philosophers and philosophy in action out there in the real world, in government and politics.

It’s fine if you tell me: “Mr. Sim, I think philosophy is too abstract. I don’t like it.”

I’m cool with that. You are free to choose. It’s your life, not mine.

But don’t throw philosophy away, or dismiss it as something silly and useless just because it is too abstract for you, or if the things you learn seem to have no application to the world. Many of the things we study in school seem to have no application, but that’s only because we lack the creativity and imagination to see how they are relevant.

Many of us may not have the opportunity to see philosophy in action, but we shouldn’t mistake that to mean that it’s not making a real impact on the world today. Philosophy is in action, often behind the scenes.

Let me end the discussion with something very real. So far we’ve talked about philosophy in governance and the private sector. What about one’s personal life?

As it is now, I am 29 years old and married. And I can tell you that as we get older, we carry more responsibilities. And sometimes this leads us to difficult situations, where we have to choose between options that are not ideal at all. These options may affect only you, or it may affect other people in your life, e.g. your parents, your partner, your children.

Soon, you will have to ask yourself difficult questions: what should I study after I graduate from Raffles Institution? What should I do with my life?

When you go out to work, you will have to deal with the same question: what should I do with my life? Maybe you have to ask questions like, should I leave this high paying job that’s making me miserable for a low paying job that might make me happier? Soon you’ll be confronted with questions like: what do I do with my time and my money?

These are real questions and they can be very painful and difficult to answer. Sometimes we don’t even know the answers, and that can be incredibly frustrating.

Philosophers themselves have tried, and are still trying, to answer these kinds of questions. I can tell you that their answers don’t always work for me. Nonetheless, there is value in learning about their thoughts. These thinkers have given me a broader perspective on problems, and they have certainly helped me make better decisions. Moreover, my philosophical training has helped me make painfully difficult yet sound decisions from time to time.

I have friends who appreciate the fact that I can think through these problems clearly for them, and they come to me to help clarify their thoughts and problems.

It’s fine if you don’t intend to do great things to change the world. It’s fine if you are passionate about other things in life and you’d rather focus your energy on them.

But the point I want to make is this: be sure to have a good dose of philosophy in your life. Whether it’s a big dose or a small dose, take it seriously. It will help you in your personal life and in your work.

And if you hope to do great things in the future, good for you. Philosophy will provide you with the skills and content to help you achieve it.

Thoughts About the Ethical and Societal Implications of Hi-Tech Development

Tomorrow is a big day for me. I’ve been invited to speak at a conference jointly organised by the Financial Times and Nanyang Technological University’s (NTU) Institute on Asian Consumer Insight.

More information about the event can be found here: https://live.ft.com/Events/2015/FT-ACI-Smarter-World-Summit

I’ve been asked specifically to talk about philosophical issues related to artificial intelligence, robots, home automation, and other emerging technologies of the future.

Screenshot of the panel discussion I’m in. The event description says: “As we move into an era of driverless cars, virtual financial advisers, and robo-waiters and waitresses, the business environment – and more broadly, society in general – is changing at an incredible pace. What will the jobs of the future look like, and how should firms be preparing to adapt? For B2C firms, how do different customer segments generally react to adopting new technologies? Is there an optimum way to phase in technological changes? What can be done to minimise any adverse impacts new technological developments might have on society?”

I’m really excited, but I’m also very nervous because it’s a panel discussion with questions thrown at me. I’d be a lot less nervous if it were a talk, where I can prepare and plan in advance all that I want to say.

I can only anticipate what people will ask me. So, in this blog post, I’ll write all the things that I’ve prepared to say for tomorrow. I can only hope, with fingers crossed, that they will ask me questions along these lines. (This post is very raw, but I will come back to edit it after the event)

1. What is technology?

It’s easy to forget that technology is a tool that humans use as a means to fulfil a human purpose. It is designed by humans, ultimately for humans, by exploiting natural or social phenomena to achieve that function. There are two sides to technology: the “hardware,” referring to the physical things that exploit the phenomena; and the “software,” the concept or logic that arranges and organises the “hardware” to fulfil that particular purpose. At the most basic level, it is humans and our minds that function as the “software” of technology, manually controlling these tools to achieve what we want. At the more sophisticated level, it is computer code that controls the computer system(s) to achieve a desired human effect.

On another level, we can understand technology as the collection of devices and practices that shape our culture. Technology is pervasive. It is present in our homes and in our work. It is what we use nowadays to get from place to place, and it is what we use to communicate with people. It is involved in giving us the food we eat, and the water we drink. Technology is everywhere, used in almost every single aspect of our lives to fulfil our human purposes. Technology is an essential component of human society and culture. We create the technologies that shape our culture. And it is this culture, which in turn, shapes the way we think, perceive, value, act, and respond to the people around us. Technology changes lives, for better or for worse, whether big or small. The very introduction of any piece of technology into a community will forever alter the path by which the community’s culture develops.

Technology is a tool which has the power not only to help us achieve our needs and wants, but also to shape those needs and wants, and how we understand ourselves and our role in the world. There is always a feedback loop between technology and humans.

2. Can technology save the world?

(I’m using the term “save the world” in particular because of the salvation narrative used to portray technology by some organisations or peoples. By saving the world, I’m referring to complex human and societal problems, or global challenges that confront nations from East to West, e.g. solving global warming, famine, political crises, wars, etc.)

It’s interesting how many proponents of technology speak about technology as if technology (in the broadest sense of modern technology) can save the world, can solve some of the most pressing problems of the world, can make the world a better place for ourselves and for our future generations.

Some may cite the example of the atomic bomb as a tool for international peace: it was what made Japan surrender, and it is what keeps the balance of power around the world. But some historians have pointed out that this is not true. It was other human factors, other human concerns, that led Japan to surrender; the Japanese were still more than ready to continue fighting even after the atomic bombs were dropped on them. This is one of many examples in history where technology, no matter how great or horrifying it may be, does not save the world.

And of course, technology cannot and will not save the world. It is but a tool, and as a tool, its effectiveness depends on the people using it. Tools are only instrumental to solving problems. Human problems, with all their complexities and complications, will remain human problems regardless of the amount of technology we throw at them. It would be naive to assume that technology – as a tool – will save the world.

But if we consider the impact technology has on culture – its power to transform cultures, perceptions, thinking and values – we may get a glimpse of how technology can facilitate “saving the world,” or more accurately, play an instrumental and effective role in helping humans resolve human and societal problems.

Let us consider the example of a bridge. We can build a bridge to connect two towns separated by a river. But in doing so, the bridge – like a catalyst – generates new means and opportunities for human interaction and for the exchange of ideas and cultures. The bridge is an instrumental means that becomes part of other human objectives. Over time, the interaction between the two towns will transform their communities – their overall cultural outlook, ideas, production and economy.

Other influential technologies have the power to transform cultures as well. Of course, technology can transform culture for better or for worse, depending on how and what the technology facilitates and is instrumental for. But this is perhaps a clue by which we can understand the world-saving potential of technology: its impact on culture as a whole, as an indirect means of achieving a “world-saving” effect.

3. Problems arising from technology are mainly human problems

One of the things we don’t expect from technology is that it generates more human problems than technical problems.

Why do more human problems come about? It goes back to the impact technology has on culture. As mentioned earlier, depending on how a piece of technology serves as an instrumental means to broader human purposes, it can transform culture for better or for worse. Of course, this puts the situation too simply: the changes are better in some ways and worse in others. Smartphones have created opportunities for us to interact with one another in so many wonderful ways, but they have also facilitated human laziness in so many other ways.

We need to recognise that underlying many of the technological problems people complain about are actually human problems. These are problems that will not be solved with more technology. A lazy person, for example, will continue to be lazy, and will exercise his laziness over all the technological tools in his possession (this I speak from experience). No amount of productivity tools will solve that problem.

The solutions are to be found in social, political and even ethical means. But perhaps part of the difficulty we face now is that the rate of technological development is so fast that our cultures are transforming faster than we can make sense of them, or even identify the problems and their solutions. Being aware of this is perhaps a first step towards a more tangible solution.

4. Technology changes expectations

If we look at the history of technology, inventions like cleaning appliances and computers promised to free up our time to pursue leisure or other meaningful activities. Instead, the complete opposite happened.

Appliances like the vacuum cleaner were supposed to reduce the time and effort required to clean the house. Instead, they led to increased expectations of cleanliness. If you have a machine that cleans more effectively, how is it that your house is dirty? And with greater advances in cleaning technologies, the expectations continued to rise. What is interesting is that the concept of the housewife as one who looks after all the cooking and cleaning of the house is a very modern one, born as a result of such cleaning appliances. Before that, women worked from their homes – in farming or textiles – while they tended to the cooking, cleaning and child-raising. But it was the increased expectation of a clean house that made them so busy with cleaning, so busy trying to live up to the new standard, that they became too busy to work.

The same goes for computers. Before computers became the mainstream tool of productivity, they were marketed as a more efficient and productive means of working. You could save time at work and devote more time to leisure or other meaningful activities. In fact, John Maynard Keynes predicted that in the future we would work only 15 hours a week, because technological advancements would have made our work easier. But none of this happened. Why? Because our expectations of work changed. If one employee could do the same amount of work in less time, it didn’t make sense for the employer to hire three employees; he could fire the other two and let that one employee do the work of three. And of course, if you could do the same work in a shorter time, you could also do the same work at a much higher quality in that time.

Machines are, of course, almost flawless in their operation and highly efficient, able to work long hours (even 24/7) without needing time to rest. Because we use these systems so regularly at work and at home, it’s easy for such machine-thinking to leak into the way we perceive ourselves and others. Not only do we expect people to do more work in the same amount of time, we have a tendency to demand that we work like machines.

This thinking is so prevalent that we find ourselves expressing it in our conversations from time to time. Here’s one:

“It’s so easy to forget that we’re not machines, that we need to rest.”

Sound familiar?

We see this machine-like requirement present sometimes in our hiring processes. We want people who are productive, efficient, least prone to error, etc. In short, we want someone as perfect as a machine!

It is interesting that future, emerging technologies are making the same promises as the technologies before them – that we will be more productive and save time, that we will have more time for leisure and other meaningful activities. But history has shown, time and again, that this is not the case, because our expectations change: what we expect of ourselves and of others changes.

The real issue we need to consider is how AI, automation, etc. will change our expectations. Will they become more and more unrealistic? Has our society, perhaps, raised expectations beyond what our technology and people can currently support? I say this because in many big cities and big organisations, the expectation to work long overtime hours has increased tremendously.

More importantly, will we humans expect ourselves to behave more and more like machines, with less room to express our humanity? No room for error, for slowness, etc.? Are we creating a meritocracy based on machine-like perfection?

So the issue we need to consider is this: when we introduce new efficient and time-saving technologies, do we need to be aware of the way we market them? Is our marketing changing expectations faster than technological progress and human capacity can sustain? Should we consider tempering expectations?

5. If all you have is a hammer, it is tempting to treat everything as nails

Abraham Maslow wrote:

“I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.”

It is tempting to assume that every problem has a technological solution. And just as hammering things that are not nails can damage them, problems arise from such an approach.

Part of the problem stems from the problem-solving approach itself. While it is useful for developing technologies to solve physical problems, it is trickier when it comes to human and social problems. Human problems are complex and multi-dimensional. To solve a problem in a way that technology can address requires, first of all, that the problem be defined in a way that suits a technological solution. This approach reduces the complexity and richness of human problems. So while technology can be developed to address the defined problem, it ignores all the related issues.

Here is an analogy to highlight another problem. A plumber is able to repair my toilet plumbing because he has an idea and understanding of what working plumbing is. But what about societal and human issues? Can one develop a comprehensive idea of what the end solution is or should be? It is not possible. What we can envision is limited, and while we may develop a solution in that direction, it once again ignores everything else. This can and will lead to a lot of unforeseen consequences.

The problem, of course, is that with this attitude of treating everything as nails, when things go wrong, the temptation is to invest more money in more technological solutions.

The underlying issue is this: Technology cannot save the world, nor can it solve all problems. It is a tool. The narrative we have about technology’s potential is highly problematic. Human problems must still be resolved by humans, by communities, and by a rich understanding of what it means to be human and the ways humans can flourish where they are.

We should reframe our problem-solving narrative to this: humans can make a difference with the assistance of technology. It is not technology alone, as if it had miraculous powers, but humanity assisted by technology – humans using technology to bring out the best in other humans.

I think it is essential to ponder the complexities of humanity in close collaboration with the humanities and social sciences. This can and will lead us to richer understandings of what the problems are, how we can go about resolving them, and where technology can be applied.

6. How far does the technological/robotic revolution have to go?

I will make a very provocative claim here, the point of which is to make you pause and ponder the extreme opposite view, so that you might find a balance in your own way.

The technological/robotic revolution can take a rest. There’s no need for it to go any further.

Bertrand Russell, after his time teaching in China, returned to the UK with deep reflections on the problems of the West. He wrote – and this really resonates with me:

Our Western civilization is built upon assumptions, which, to a psychologist, are rationalizings of excessive energy. Our industrialism, our militarism, our love of progress, our missionary zeal, our imperialism, our passion for dominating and organizing, all spring from a superflux of the itch for activity. The creed of efficiency for its own sake, without regard for the ends to which it is directed, has become somewhat discredited in Europe since the war, which would have never taken place if the Western nations had been slightly more indolent. But in America this creed is still almost universally accepted; so it is in Japan, and so it is by the Bolsheviks, who have been aiming fundamentally at the Americanization of Russia. Russia, like China, may be described as an artist nation; but unlike China it has been governed, since the time of Peter the Great, by men who wished to introduce all the good and evil of the West. In former days, I might have had no doubt that such men were in the right. Some (though not many) of the Chinese returned students resemble them in the belief that Western push and hustle are the most desirable things on earth. I cannot now take this view. The evils produced in China by indolence seem to me far less disastrous, from the point of view of mankind at large, than those produced throughout the world by the domineering cocksureness of Europe and America. The Great War showed that something is wrong with our civilization; experience of Russia and China has made me believe that those countries can help to show us what it is that is wrong. The Chinese have discovered, and have practised for many centuries, a way of life which, if it could be adopted by all the world, would make all the world happy. We Europeans have not. Our way of life demands strife, exploitation, restless change, discontent and destruction. 
Efficiency directed to destruction can only end in annihilation, and it is to this consummation that our civilization is tending, if it cannot learn some of that wisdom for which it despises the East.

(Bertrand Russell, The Problem of China, Ch. 1)

From a philosophical point of view, it is precisely because we have a linear conception of time and a linear narrative of progress in understanding and controlling nature that we assume there must always be a need for greater progress and development in our technologies.

Should things get better? Sure, why not. Should things be more convenient? Sure, why not. But why do we need things to get better, to be more convenient?

Why the discontent? Why do we not learn to accept things the way they are? I’m not saying this is what we should be doing, but I’m saying we should at least stop and ponder on this question.

Modern science and technology have given us the facade that we are in full control of our lives and destinies, and that as long as we can arrange life in a certain way, we can achieve happiness. But it is interesting that this view is fairly recent! If you go back a few more centuries, you'd find that philosophers of both East and West have said that happiness and contentment can be achieved anytime, even now.

7. What is the potential of AI to replace jobs that we currently consider could never be done by a machine or an algorithm?

The way AI is progressing, I believe AI could very soon replace many low-level jobs.

But of course, the issue is whether companies are willing to invest huge sums of money in these AI systems. In some sectors, mass foreign labour is still cheaper than investing in new technologies that require far less manpower. There is little or no incentive to switch over.

In this respect, the technology may be there, but there are other social/economic/political factors that would stand in the way of such adoption.

8. Who is responsible if a driverless car kills someone? What if an investment decision by a virtual financial adviser goes wrong? How can humans best adapt to ensure that machines are serving them and not the other way around?

One problem with the way the question is framed (“How can humans best adapt to ensure that machines are serving them and not the other way around?”) is that we speak of machines as if they have agency to control us. This in itself highlights a particular outlook that we have. It’s always so easy to push all responsibility to the machines.

In the recent Volkswagen factory accident, in which a robot killed a worker, the prevalent discourse was that a machine had killed a man, that the time had come when the machines were out to get us.

We can also talk about simple day-to-day activities. You try to make a special arrangement with a particular organisation, and the first thing you hear is: "Sorry, I can't do that, the system won't allow it."

Surprisingly, many of us are willing to accept this excuse, as if machines have full control. Or rather, it’s because we have difficulty taming these machines that we feel that the machines are in control.

It is precisely because we are so ready to give up all responsibility to the machines that we feel this way.

It is also this narrative that makes us feel that there isn’t anyone responsible if a driverless car kills someone, or if a virtual adviser gives the wrong financial advice.

What I want is to turn our attention to the developers. I'm not saying that we should hold them accountable for everything.

Rather, these debates are a problem today because of the way we have framed them. I think we need a more design-oriented, design-focused conception of safety and responsibility.

It's not yet in our culture to develop responsible coding or responsible development. What is essential is a paradigm of ethical design and ethical development, one that ensures not only that safety is given priority, but that the technologies themselves are empowering. Many badly designed technologies are dehumanising: so focused on function that they strip people of their humanity, leaving them feeling alienated or disenfranchised. In many ways, this is what leaves us feeling enslaved by technology, because there isn't much that we can do. It's really about the design. Good design is humanising, and leaves people feeling empowered to embrace human goods. This applies to robots too: we can design robots in ways that are empowering and humanising. It's a question of whether or not we include these considerations in the design process, rather than focusing purely on function.

9. Is some of the Elon Musk, Stephen Hawking stuff about machines killing us etc., overdone?

For context, Elon Musk said:

“I don’t think anyone realizes how quickly artificial intelligence is advancing. Particularly if [the machine is] involved in recursive self-improvement . . . and its utility function is something that’s detrimental to humanity, then it will have a very bad effect. … If its [function] is just something like getting rid of e-mail spam and it determines the best way of getting rid of spam is getting rid of humans…


Stephen Hawking said:

“The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and re-design itself at an ever increasing rate… Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”


A tongue-in-cheek question that we should nonetheless ponder: if fully self-improving AI technology is so scary, why are we even developing it? Why not spare the human race by not doing it?

I don't have a definitive answer to this question, but here's what I'll say:

Of course, it is natural to fear what we do not know. After all, we will not be in control of self-improving AI technology, so we cannot predict what it’ll do to us, either directly or indirectly.

Two assumptions underlie Musk's view of AI (and that of many Hollywood movies): (1) that humans are so bad that a more intelligent AI would need to eliminate us; and (2) that a more intelligent AI would see us as a threat to its existence or to the survival of the planet, and conclude that we must be eliminated.

I think a lot of these are projections of our insecurities: that someone or something better than us will take over and get rid of us. This is not necessarily the case, and it might be possible to forge friendships with such beings. Of course, some of us may prefer to see the whole thing as a power struggle, in which case the Chinese perspective might be worthwhile: always keep your friends close, but always keep your enemies closer. Friendship and cooperation, even with the most intelligent of beings, will always be worthwhile.

Hawking's concern is more credible, as he presents such AI beings as competitors in our own evolution. If this is how AI materialises in the future, then it is a credible threat. Of course, this assumes that AI would compete with us for the same niche, such that competition would lead to our elimination.

But above and beyond all these, we really need to turn our attention to the design and development phase. A large aspect missing from this debate is a concept of ethical and responsible development and design. Safety has never been a top priority in the history of inventions, until accidents occur. Perhaps it's time we factor such considerations into our development stages.

10. How should we organise our working lives if lots of work we currently do is taken care of by machines? Will there always be new work created just as old work is destroyed, will we have to work shorter hours, or will it mean that some people work long hours (and are paid well) and others struggle to find work at all. In other words, will increasing mechanisation increase inequality?

I once attended a talk where the projection was that 15 years from now, if nothing changes, unemployment will be very high. The rate of technological development is so rapid that people will lose their jobs not only because they are replaced by machines, but also because they will not have time to learn the new skills needed to operate the new systems. Possibly the younger generation will have an edge in learning these new systems much faster than us.

Of course, there are many other political, economic and social factors at play that could prevent the widespread adoption of such automation, as is evident in some sectors today that still rely on mass labour because it's still significantly cheaper.

But let's assume that there is widespread adoption. As I mentioned earlier, expectations will increase, and so if history is a good gauge of the future, new work will be created, but expectations of work will be greater than before: people will be expected to do the work of yet more people in a shorter period of time.