It's not a simple thing, says Google's artificial intelligence researcher, but the world has an opportunity to get it right as the software explodes in popularity.
It's as though we were given a second chance to use this technology to dismantle some of the biases we see in society, she said.
Her optimism comes as the tech industry and almost every other sector have spent the year abuzz about AI's promise and perils.
Some see the technology as a transformative tool that will revolutionize everyday life, bringing speed, efficiency and solutions to some of the world's biggest challenges.
Others, including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, warn that the technology is moving too fast and that guardrails are needed before wide-scale deployment.
Geoffrey Hinton, the British-Canadian computer scientist widely considered the godfather of AI, is so concerned about the technology that he left Google to speak more freely about dangers he has said include bias and discrimination, joblessness, echo chambers, fake news, battle robots and existential risk.
Singh acknowledges that AI is not without risks.
She has even seen them firsthand, she said.
She added that if you ask an AI model to generate an image of a nurse, it will often return a woman, while a request for an image of a CEO typically brings up a white man and a request for a software engineer delivers racialized men.
Some of that work has come from studying skin tones, because computer vision systems, which allow computers to 'see and understand' images of people and environments, often fail to decipher dark and medium tones.
To break the cycle, Google collaborated with Harvard University professor Dr. Ellis Monk to develop an open-source skin tone scale that can detect darker tones and be used by AI models to reduce biases.
The scale has already been used in camera technology on Pixel phones and in search tools to diversify results by surfacing a range of skin tones and hair textures.
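A tone scale like this can be put to work as a simple audit tool: bucket sampled skin-pixel colours against a set of reference swatches and check how evenly a dataset or result set covers the range. The sketch below is illustrative only; the ten hex swatches are rough approximations chosen for this example (not Google's official values), and real pipelines would estimate tone from detected face regions with lighting correction rather than raw pixel colours.

```python
# Illustrative sketch: auditing an image set's skin-tone coverage against a
# 10-point tone scale. The reference swatches are rough approximations used
# only for this example, ordered light (1) to dark (10).

REFERENCE_TONES = [
    "#f6ede4", "#f3e7db", "#f7ead0", "#eadaba", "#d7bd96",
    "#a07e56", "#825c43", "#604134", "#3a312a", "#292420",
]

def hex_to_rgb(code):
    """Convert a '#rrggbb' string to an (r, g, b) tuple of ints."""
    code = code.lstrip("#")
    return tuple(int(code[i:i + 2], 16) for i in (0, 2, 4))

def nearest_tone(rgb):
    """Return the 1-based index of the reference swatch closest in RGB space."""
    def dist(ref):
        return sum((a - b) ** 2 for a, b in zip(rgb, hex_to_rgb(ref)))
    return min(range(len(REFERENCE_TONES)),
               key=lambda i: dist(REFERENCE_TONES[i])) + 1

def tone_histogram(samples):
    """Count how many sampled skin-pixel colours fall in each tone bucket."""
    counts = {i: 0 for i in range(1, len(REFERENCE_TONES) + 1)}
    for rgb in samples:
        counts[nearest_tone(rgb)] += 1
    return counts
```

A histogram heavily skewed toward the light end of the scale is the kind of signal that would flag the imbalance Singh describes in training data or search results.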
The company also carried out Media Understanding for Social Exploration, a study of 12 years of American television shows.
The findings were 'stark,' Singh said.
Over those 12 years, people with dark and medium skin tones saw a slight increase in screen time, while those with lighter skin saw a slight decrease.
The work comes at a pivotal moment for AI, as top tech firms compete for billions of dollars in the industry to develop and launch the most advanced technology.
Google forecasts that generative AI could boost the economy by $210 billion and save the average worker more than 100 hours a year.
Google leaders accompanying Singh said they're already using AI to enhance breast cancer detection, sequence genomes and develop systems that medical clinicians can plug questions into to generate health guidance.
Shopify is using the technology to reduce search abandonment, while Canadian National Railway Co. is building a digital supply chain platform with automated shipment tracking, Google workers said.
AI anchors Google's search engine, maps and cloud storage, and underpins Bard, a competitor to ChatGPT that has yet to launch in Canada.
Google Cloud's Canadian operations director, Sam Sebastian, said it would make its Canadian debut 'very soon,' but did not provide a specific timetable.
Instead, he said AI is a technological shift unlike anything he's seen in his 25 years working in the industry.
The transformative nature means that anyone in the space needs to balance AI's potential with its risks, he said.
Google has made clear commitments to adhere to a number of principles as it explores AI.
It has also said it will not design or deploy AI technologies that cause or are likely to cause harm, including weapons or systems used to inflict injuries. Nor, it said, will it work on AI-based 'surveillance violating globally accepted norms' or systems that contravene laws and human rights.