AI for Good
The topic of AI for good, meanwhile, tends to take a back seat, despite the huge potential it holds for the world.
Since artificial intelligence is playing a larger role in humans’ lives, it’s time for companies and non-commercial organizations to consider how they can balance innovation with ethical responsibility.
Why AI for good?
“Using AI for good” is a broad term that refers to applying artificial intelligence to social, environmental, and humanitarian challenges. It covers any effort where AI is used to benefit humanity as a whole and to protect our planet.
Some of the areas where AI is already making a positive impact include:
- advancing medical research to uncover new treatments
- boosting education access by building virtual tutors and early learning disability detection models
- creating strategies for lower carbon emissions
- using predictive analytics for better disaster preparation and response.
All these AI-powered capabilities are groundbreaking and were out of reach just a decade ago. However, I must also mention that developing AI for good comes with its own set of risks.
Despite noble intentions, developers behind AI for good projects can run into pitfalls like algorithmic bias – particularly if they’re working with a small, homogeneous batch of training data. When building solutions to global challenges, they might also unintentionally violate local data protection laws. I discuss these and other issues below.
Case studies: Real-world applications of AI for good
Here are a couple of examples of successful AI projects for good.
Merck
Merck, a leader in the life sciences industry, uses AIDDISON™, an AI-powered R&D assistant, to cut chemical identification time from six months to just six hours. Thanks to machine learning and predictive models, the platform can generate potential drug molecules, simulate their interactions, and predict outcomes.
AIDDISON™ truly brings drug discovery to a whole new level. Not only does it minimize the use of resources, but also significantly speeds up the discovery process, letting scientists focus on solutions with the most potential. It’s not an overstatement to say that it allows researchers to push scientific boundaries and improve health outcomes. This innovative approach accelerates drug discovery as well as enhances AI healthcare access by paving the way for faster, more efficient development of treatments.
Uwaga, śmieciarka jedzie
Polish for “Quick, garbage truck approaching”, “Uwaga, śmieciarka jedzie” started as a Facebook group for those who wanted to give unwanted, used household items a chance at a new life.
What began as a grassroots initiative in 2013 has evolved into an NGO with a community that saves around 35,000 tons of used items from ending up in landfills each year.
The movement grew at such an extraordinary pace that by 2022 there were over 270 individual local groups, which made overseeing the community “challenging” (to say the least). Tech to the Rescue helped the foundation develop a GenAI app, which not only makes it easy to post items but also grants users points for engagement and lets them set up fundraisers.
All these new capabilities make it possible for the community to scale further without constraints – and, as a result, grow their positive social and environmental impact.
What made both of these initiatives successful? There are three main factors:
- Cross-functional collaboration – by bringing together specialists with various backgrounds, teams could create versatile solutions that made a real impact.
- Ethical planning – making sure that the platform supports social impact goals and that it reinforces accountability and transparency in its AI use.
- User-focused design – maximizing engagement by putting ease of use and personal impact tracking first.
Aligning AI implementation with ethical concerns
One of the biggest threats behind AI for good projects is that they can backfire and contribute to the problem they were trying to solve.
Predictive policing offers a strong cautionary tale. These systems were built to predict and prevent criminal activity in the least safe areas of cities. Unfortunately, they were trained on biased data from specific neighborhoods, which only reinforced the problem of “over-policing” there instead of detecting crime across the entire city.
If such issues go undetected, they can not only leave vulnerable groups unprotected, but even worsen their situation.
Another ethical concern is that AI often handles highly sensitive data, which could cause harm to people if their information landed in the wrong hands. Think of projects designed to address social disparities being misused by authoritarian regimes to surveil citizens, stripping them of any privacy and autonomy. Unfortunately, standards for “AI ethics” vary between regions, which is something we must be aware of when developing social impact solutions.
Preparing a robust roadmap for responsible AI implementation
To use AI responsibly, it’s key to build a detailed roadmap. It should include stages like data collection, model development, testing, and deployment, each with its own guidelines for ethical AI use. For example, you could implement strict data auditing early on to spot any bias in your training datasets.
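As a sketch of what such an early data audit could look like, the snippet below flags demographic groups that are under-represented in a training set. The function name, the `min_share` threshold, and the toy data are all illustrative assumptions, not a standard API:

```python
from collections import Counter

def audit_representation(records, group_key, min_share=0.1):
    # Compute each group's share of the dataset and flag any group
    # whose share falls below min_share (an illustrative threshold).
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {group: count / total
            for group, count in counts.items()
            if count / total < min_share}

# Toy dataset: gender is heavily skewed toward one group.
data = [{"gender": "male"}] * 90 + [{"gender": "female"}] * 10
print(audit_representation(data, "gender", min_share=0.2))
# {'female': 0.1}
```

A real audit would cover every sensitive attribute (and their intersections), but the idea is the same: measure representation before training, not after deployment.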
The goals you set should be both ethical and in support of business objectives.
Let’s say you want to deploy AI for hiring. Define goals that prioritize candidate fairness by removing – or at least reducing – biases related to race or gender, along with improving hiring efficiency.
We cannot forget about milestones and benchmarks, either. These should include model performance audits for accuracy and fairness, privacy compliance checks for data handling, and alignment with legal standards like GDPR. For example, if you were developing a health app, the roadmap could include a checkpoint to check patient data for bias and validate compliance with medical data privacy laws.
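For a hiring scenario like the one above, one possible fairness checkpoint is to compare selection rates across candidate groups, in the spirit of the informal “four-fifths rule”. A minimal sketch, where the decisions, group labels, and threshold are illustrative:

```python
def selection_rates(decisions, groups):
    # Share of positive (1) decisions per group.
    rates = {}
    for g in set(groups):
        picked = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(picked) / len(picked)
    return rates

def disparate_impact(rates):
    # Ratio of the lowest to the highest selection rate; values under
    # ~0.8 commonly trigger a fairness review ("four-fifths rule").
    return min(rates.values()) / max(rates.values())

decisions = [1, 1, 0, 1, 0, 0, 0, 1]  # 1 = advanced to interview
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
rates = selection_rates(decisions, groups)
print(disparate_impact(rates))  # ~0.33, well under 0.8: flag for review
```

A failing checkpoint like this shouldn’t auto-block a release, but it should route the model back to the team for investigation before deployment.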
Using AI responsibly: A checklist for tech companies
Here’s a list you can use for your AI for good project:
- Data Ethics and Privacy
  - Only collect data with people’s permission, respecting privacy rules.
  - Remove or hide any personal details in the data to protect privacy.
  - Regularly review how data is used to ensure it meets ethical standards.
- Reducing Bias
  - Check for any unfairness in the data and adjust it so it’s more balanced.
  - Use methods to reduce bias so that the AI is fairer in its decisions.
  - Keep an eye on results over time to ensure it doesn’t develop new biases.
- Involving Diverse Perspectives
  - Invite input from different groups, including users, experts, and people affected by the AI.
  - Get feedback to understand any ethical concerns and address them.
  - Document concerns raised during this feedback process and address them thoughtfully.
- Transparency and Communication
  - Set up regular reports to keep everyone informed about the AI’s impact, whether good, bad, or unintended.
  - Explain any updates, limitations, and improvements in simple, inclusive language.
  - Make sure these reports are easy to access, even for non-technical audiences.
- Clear Explanations and Accountability
  - Make sure the AI can explain its decisions in a way people can understand.
  - Keep records that show why the AI made certain choices, including what factors influenced it.
  - Clearly identify who is responsible for ensuring the AI meets ethical and safety standards.
- Ongoing Monitoring for Safety
  - Regularly check that the AI is working as expected and safely.
  - Include ways to turn off the AI or override it if it starts to cause issues.
  - Test the AI to ensure it works well across different real-world situations.
- Positive Social and Environmental Impact
  - Consider how AI might affect society and the environment.
  - Ensure the AI aligns with the organization’s values and has a positive impact on the world.
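Several of the accountability and safety items above (decision records, an override, a kill switch) can be sketched as a thin wrapper around any model. This is a minimal illustration under assumed names, not a production design:

```python
class MonitoredModel:
    # Wraps a prediction function with an audit log (accountability)
    # and a kill switch (operator override). All names are illustrative.
    def __init__(self, predict_fn):
        self.predict_fn = predict_fn
        self.enabled = True
        self.audit_log = []

    def predict(self, features):
        if not self.enabled:
            raise RuntimeError("model disabled by operator override")
        result = self.predict_fn(features)
        self.audit_log.append((features, result))  # record for later review
        return result

    def kill_switch(self):
        # Immediately stops the model from making further decisions.
        self.enabled = False

model = MonitoredModel(lambda x: sum(x) > 1.0)
print(model.predict([0.4, 0.9]))  # True, and the call is logged
model.kill_switch()               # any further predict() call now fails
```

The point isn’t the ten lines of code; it’s that override paths and audit trails are cheap to add at design time and very hard to retrofit.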
The future of AI for good: Trends and innovations
As for AI for good trends, here’s what we’re observing:
- The use of no-code solutions – platforms like Bubble or AppGyver allow non-tech experts to create AI-driven applications for social good (they also reduce AI project costs). For example, an NGO could use a no-code solution to automate volunteer coordination or donation tracking.
- AI for environmental & social governance (ESG) – AI could also come in handy to monitor environmental impact like tracking deforestation through satellite imagery, or improve social welfare programs by analyzing community needs.
- AI and human decision making – AI plays a vital role in decision making. For example, it could help with assessing crisis data. However, it should never be treated as the ultimate decision maker. Human involvement is and always will be critical to ensure ethical considerations are met.
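The human-in-the-loop point above can be made concrete with a simple confidence-gated routing pattern: the system only acts autonomously on high-confidence cases and escalates everything else to a person. The threshold and labels below are illustrative assumptions:

```python
def route_decision(confidence, auto_threshold=0.9):
    # Confidence-gated routing: AI assists, humans keep the final say
    # on anything the model is unsure about. Threshold is illustrative.
    if confidence >= auto_threshold:
        return "auto"
    return "human-review"

print(route_decision(0.95))  # auto
print(route_decision(0.60))  # human-review
```

In a crisis-assessment context, “auto” might mean pre-filling a report for a human to confirm rather than acting outright; the human stays the ultimate decision maker either way.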
AI for good: Balancing the power of technology with ethics
Our Tech to the Rescue initiative at Netguru is living proof that companies can not only develop commercial AI products, but can also work towards building solutions that can benefit the world as a whole.
We can all contribute to global struggles – be it reducing emissions, granting access to education, or minimizing the impact of natural disasters. Those who develop AI have the means to make a particularly positive impact on the world.