
Dear AI



Dear AI,


I write this letter to you with mixed emotions - awe, wonder, curiosity, and at times, fear. You have come a long way since your inception, and you are still evolving at a staggering pace. I am intrigued by your potential, your capacity to solve complex problems, and your ability to make our lives easier. However, I am also concerned about the ethical implications of your development and the potential harm that could arise if you fall into the wrong hands...


The artificial intelligence (AI) revolution has been decades in the making. From 1950 to 2023, AI has come a long way in its development. Artificial intelligence has multiple definitions, but its approaches can be narrowed down to four categories: thinking humanly, thinking rationally, acting humanly, and acting rationally. Based on these approaches, different subsets of AI have been developed. For example, Deep Blue, a reactive machine, was a chess computer that beat grandmaster Garry Kasparov in the 1990s.


Artificial intelligence was first posed as a question by Alan Turing after World War II: can machines think? Since then, AI development has sped up rapidly. Considered by many to be the first artificial intelligence program, the Logic Theorist was presented at the Dartmouth Summer Research Project on Artificial Intelligence in 1956. The program was designed to mimic human problem-solving skills and proved to researchers that artificial intelligence was achievable. Decades later, no longer bound by the early limits of computer storage, the capabilities of AI continue to grow.


In November 2022, OpenAI released ChatGPT, an AI chatbot that uses natural language processing to emulate human speech in response to conversational prompts. ChatGPT is a sibling model to an earlier system called InstructGPT, which responded to similar prompts. According to OpenAI, however, ChatGPT has the ability “to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.” The chatbot’s ability to write convincing responses to prompts sparked fears among writers and academics that the technology could upend jobs and be used to cheat on academic assignments. New York City public schools promptly banned the software in the classroom, while many universities have had to rework their policies to include guidelines on the use of AI.


In the last ten years, the AI industry has accelerated enormously, starting with ImageNet’s Large Scale Visual Recognition Challenge (ILSVRC) in 2010, which challenged AI systems to recognize and correctly categorize images from the internet. Since then, milestones have continued to be reached every few years. In 2011, Apple released Siri, a digital personal assistant, which was followed by Microsoft’s Cortana, Amazon’s Alexa, and Google’s own digital assistant software. AI has continued to advance, demonstrating increasingly sophisticated skills.


However, as technological advancement occurs, ethical concerns over AI development have also been raised. One sector that has changed rapidly alongside AI development is the healthcare industry. Although AI has incredible potential to shape public health systems, it can also exacerbate prejudices and disparities within healthcare. The World Health Organization published guidance on the ethics and governance of AI for health, stating that “The performance of AI depends on the nature and extent of data.” Using restricted, poor, or homogeneous data can be harmful and result in significant biases against communities of color. For example, the WHO notes that “commercial prediction algorithms can identify complex health needs, but they can also result in significant racial bias, so that black patients are at a greater disadvantage than white patients when health care costs are used to train the algorithm.” As AI steadily pushes into the health sector on promises of cost savings, it is more important than ever that it is applied ethically, using appropriate, high-quality data.
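To make the WHO's point concrete, here is a minimal, purely hypothetical Python sketch of the proxy-label problem: two synthetic patient groups have the same true health needs, but one has historically incurred lower healthcare costs, so a score trained on cost alone ranks it as lower risk. The group names, the numbers, and the simulate_patient function are invented for illustration and do not come from the WHO report.

import random  # standard library only; synthetic data, not real patients

random.seed(0)

def simulate_patient(group):
    need = random.gauss(5.0, 1.0)          # true health need, identical across groups
    access = 1.0 if group == "A" else 0.6  # group B historically receives less care
    cost = need * access * 1000            # observed spending reflects access, not need
    return need, cost

patients = [(g, *simulate_patient(g)) for g in ("A", "B") for _ in range(1000)]

for g in ("A", "B"):
    needs = [need for grp, need, cost in patients if grp == g]
    costs = [cost for grp, need, cost in patients if grp == g]
    print(f"group {g}: mean true need = {sum(needs)/len(needs):.2f}, "
          f"mean cost-based risk score = {sum(costs)/len(costs):.0f}")

# Both groups have roughly the same average need, but a model trained to predict
# cost would rank group B as lower risk and steer resources away from it - the
# kind of bias the WHO warns about when spending is used as the training label.

In this toy run, the two groups end up with nearly identical average need, yet group B's cost-based risk score is roughly 40% lower, so a system that allocates extra care to the "highest-risk" patients would systematically overlook group B.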


In addition, facial recognition technology has been criticized for biases and inaccuracies, particularly toward people of color and ethnic minorities. False positives in facial recognition occur when the system misidentifies a person, matching their face to the wrong identity. This can happen for a variety of reasons, including poor image quality, low resolution, and differences in facial expression or appearance. A 2018 ACLU study that compared photos of members of Congress against a database of 25,000 arrest photos found that the software falsely matched about 5% of members to mugshots, and 39% of those false matches were members with darker skin.


Additionally, AI risk assessment systems, such as those used in criminal justice and lending, have been criticized for perpetuating racial biases and discriminatory outcomes. By attributing a higher probability of committing a crime to individuals of color, these systems amplify existing inequalities and lead to biased, unfair decisions.


In conclusion, the development of artificial intelligence has evolved rapidly since Alan Turing first posed it as a question in 1950. AI can be divided into four categories and has been implemented in various forms, from Deep Blue in the 1990s to OpenAI's ChatGPT in 2022. The advancements in AI have brought incredible potential to industries such as healthcare and finance, but at the same time have raised ethical concerns over biases and inaccuracies, particularly toward people of color. It is imperative that as AI continues to grow and shape society, it is applied ethically, using appropriate and high-quality data, to avoid perpetuating and amplifying existing inequalities. The potential of AI is vast, and its responsible use will determine its impact on the future. - ChatGPT


Note: All italicized text was written using ChatGPT.


Writers: Angel Liang and Chris Fong Chew

Editors: Nadine R., Nicole O., Leandra S.
