RT Youth Power Fund – Detroit Heals Detroit and Encode Justice

Episode 109



Who We Are – Our Mission and Vision


Detroit Heals Detroit exists to foster healing justice for Detroit youth so that they are able to transform their pain into power. With a goal of combating trauma, we use healing-centered engagement to share our greatest vulnerabilities with the rest of the world while simultaneously working to dismantle oppressive systems for marginalized Detroit youth.


Our organization was created by Detroit youth for Detroit youth, between the ages of 12 and 21, who have been walking and developing in a world that has sought to silence their pain. Exposure to trauma has a profound impact on cognitive development and academic outcomes, and our students in Detroit seem to wake up to trauma like it's breakfast. We understand that Detroit youth face more challenges than any individual person can remedy, so our goal is to help each other heal from the trauma together. We need a voice and access to share our greatest vulnerabilities with the rest of the world, truly giving our trauma a purpose. We will lead community healing and learn strategies to advocate for ourselves and others, to deepen our own and others' knowledge, and to illuminate the lives we live and the worlds we are a part of. Speaking, writing, and advocating around your trauma is an important and powerful step toward healing. When the youth in neighborhoods heal, the community begins to heal. Healing is very much rooted in community, and we are committed to leading the movement toward Healing Justice for Detroit Youth and Beyond.

Transforming Pain Into Power

Our 4 Pillars of Impact:


Healing Justice

A framework that identifies how youth can holistically respond to and intervene in generational trauma and violence, bringing collective practices that can transform the consequences of oppression on our bodies, hearts, and minds.

Community Transformation

Engaging in a deep level of collaboration to seed transformational thinking and action. Using community transformation that focuses on the intersectionality of culture, race, gender, and class to create a space where healing is experienced collectively.

Youth-Led Organizing

A burgeoning movement that empowers young people while simultaneously enabling them to make substantive contributions to their communities. Using organizing strategies to alter power relations and to identify, advocate for, and instigate change on critical issues that are having an impact on our communities and broader society.

Liberation & Access

Educate, engage, and empower Detroit youth toward liberation for racial, social, and economic justice. Our distinct goal is to build access for our peers: Access to healing, Access to literacy, Access to liberation, and Access to new possibilities.

(The mission and values above are quoted from the organization.)

Emerging with Wings

“Healing justice is active intervention in which we transform the lived experience of Blackness in our world. We heal so that we can act and organize.”

― Prentis Hemphill

Our Purpose

Current Projects


Community Clean-up


Join us for a Community Clean-up in collaboration with Kenship! We are doing this cleanup to prioritize community care and the sharing

Movement Trauma Free Therapy Sessions


It’s BIPOC Mental Health Awareness Month!

This is an old poster from 2020, but it is included for informational purposes, as it was still showing on their website.


Black Power, Resistance & Freedom Film Festival


More projects are shown on the website.


Healing Hub Services

  • Tutoring Support (after-school)
  • Free Computer Lab and Library
  • Behavioral Support (If your child is struggling with managing their emotions, send them to us for a cool down. We have a lot of resources and equipment to assist them).
  • Healing Circles (Bi-weekly)
  • Community Space for youth to hang out, play basketball or video games, enjoy a movie, or have a light snack.
  • Washer & Dryer Services
  • Community Pantry/Fridge
  • Community Closet
  • Community Garden
  • Entrepreneur Makerspace
  • Community Dinners
  • Free Food Giveaway
  • Rest & Resistance Room (If you just need a nap to escape from the Hustle Culture, come rest in our comfy beds)

**Residents in the 48205 ZIP code get a discounted rate on our space rentals.

We are grateful to be able to serve our community.

Address: 19510 Alcoy Ave. Detroit, MI 48205

Rent Our Space


Healing Hub Basement Rental


Healing Hub Backyard Rental


Living Room/Dining Room/Kitchen Rental


Healing Hub Bed Rental


Summer Camp Registration


This event took place in August 2023, too late to include in the podcast or the article. If you are interested, it may be worth finding out the dates of similar upcoming events.

Support Healing Justice for Detroit Youth

If you would like to get involved in our efforts to help Detroit Youth transform their pain into power or if you have any general questions please feel free to contact us below.



Encode Justice


Encode Justice is mobilizing communities for AI aligned with human values.

Image credit: NBC News.



Artificial intelligence is going to change the world, and it has serious implications for civil rights and democracy. Here’s a compilation of videos, links, and resources to help you learn more:

Terms and definitions

Artificial intelligence (AI):

A set of technologies with the ability to mimic human judgment by observing the outside world. Refers to intelligence and cognitive processes demonstrated by machines rather than humans or animals.


Algorithm:

A set of rules given to a computer program to help it carry out a procedure. A lot of the algorithms that fall under the umbrella of AI are designed to make decisions and predictions independently. Algorithms are sometimes referred to as “models.” An algorithm takes in a set of inputs and produces an output.

For example, you could teach an algorithm to estimate the value of a house based on other factors, like the quality of the schools nearby, the median income of the surrounding neighborhood, the values of the neighboring houses, etc. By feeding a computer program thousands of example houses where you already know the house value and also have information for all those other input factors (this is called training data), it's able to use complex math to determine the relative influence of each of those factors on the final house value and essentially create a formula that it can apply in the future. Then, you could give the algorithm 1,000 houses whose prices you don't know (this is called testing data), and it would be able to make a pretty accurate judgment using the knowledge that it extracted from the training data.

That’s a highly simplified explanation of how algorithms like Zillow’s Zestimate feature work! Memorizing this exact process isn’t that important; you just need to remember the idea that there is usually a large amount of training data (“big data”), inputs, and an output.
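To make the training-data/testing-data idea concrete, here is a minimal Python sketch. The single input factor (a neighboring home's value), the function name `fit_line`, and all of the numbers are invented for illustration; a real estimator like Zillow's Zestimate uses many input factors and far more sophisticated math.

```python
def fit_line(xs, ys):
    """Ordinary least squares with one input: price ≈ slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Training data: houses whose sale prices are already known, paired with
# one made-up input factor (the value of a neighboring house).
train_inputs = [200_000, 250_000, 300_000, 350_000]
train_prices = [210_000, 255_000, 310_000, 345_000]

# "Learning" here is just extracting a formula from the training data.
slope, intercept = fit_line(train_inputs, train_prices)

# Testing data: a house the model has never seen. It applies the formula
# it extracted during training to make a judgment.
estimate = slope * 280_000 + intercept
print(f"Estimated price: ${estimate:,.0f}")  # → Estimated price: $284,600
```

With many input factors the same idea becomes multiple regression (or something far more complex), but the shape is identical: known examples in, a reusable formula out.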

Example: The TikTok For You Page algorithm analyzes what types of videos you enjoy the most, measuring “enjoyment” via quantitative factors like how many seconds you stay on a video before swiping away, in order to recommend more videos similar to that. This is also similar to how the Netflix recommendation engine works.

Machine learning:

An AI technique through which an algorithm gets more and more accurate as it gets more and more experienced. It continues to improve its decisions as it goes.
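As a hedged, toy illustration of "more experience means more accuracy": the "model" below is nothing more than a running average of a made-up watch-time signal, updated one observation at a time. The true mean, the noise level, and all the numbers are invented; the point is only that the estimate typically gets closer to the truth as more examples arrive.

```python
import random

random.seed(0)                 # reproducible toy data
TRUE_MEAN = 42.0               # hypothetical true average watch time (seconds)

estimate, n = 0.0, 0
errors = []                    # how far off the model is after each example
for _ in range(1000):
    observation = random.gauss(TRUE_MEAN, 10.0)   # one more "experience"
    n += 1
    estimate += (observation - estimate) / n      # incremental mean update
    errors.append(abs(estimate - TRUE_MEAN))

print(f"error after 10 examples:   {errors[9]:.2f}")
print(f"error after 1000 examples: {errors[999]:.2f}")
```

Running this, the error after 1,000 examples should be far smaller than after 10, which is the essence of the definition above.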

The harms


That all sounds cool and non-threatening, but when those algorithms are used to make high-stakes decisions, there are huge risks. We often think of technology as perfectly scientific and objective, but this is not the case. Because AI is trained on real-world data, it can pick up on real-world inequalities. It’s just like a child, starting off as a blank slate but subconsciously adopting the bias and prejudice of the older humans around them.

For example, if you’re teaching an algorithm to identify which neighborhoods might be sources of future crime, the historical data that it analyzes in order to make predictions is going to reflect the fact that law enforcement has a larger presence in some communities than others (especially Black, Brown, and low-income areas), even if those communities don’t actually commit crimes at higher rates. Black and Brown people are also disproportionately stopped by police and are arrested for marijuana-related offenses at higher rates despite approximately equal rates of usage.

Well, when you add AI to the picture, it’s going to absorb all of that racism. That ends up creating a feedback loop of discrimination, because the algorithm will learn from all these past instances of the police specifically targeting Black neighborhoods and then continue to recommend that the police target Black neighborhoods. Trying to solve a problem or end a pattern of human-driven discrimination using artificial intelligence almost always backfires, because algorithms are designed to imitate human behavior, not deviate from it.
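That feedback loop can be sketched in a few lines of Python. Everything here is invented for illustration, not a model of any real system: both neighborhoods have exactly the same underlying crime rate, but the allocation rule sends most patrols wherever past records are highest, so the initial bias in the records compounds instead of correcting itself.

```python
TRUE_CRIME_RATE = 0.05                       # identical in both neighborhoods
OBSERVED_PER_PATROL = TRUE_CRIME_RATE * 20   # records generated per patrol
records = {"A": 60, "B": 40}                 # biased historical arrest records

for _ in range(10):
    # "Predictive" allocation: the neighborhood with more past records
    # gets the bulk of the patrols (80 of 100); the other gets the rest.
    top = max(records, key=records.get)
    patrols = {hood: 80 if hood == top else 20 for hood in records}
    for hood in records:
        # More patrols -> more incidents observed and recorded, even
        # though the underlying crime rate is the same everywhere.
        records[hood] += patrols[hood] * OBSERVED_PER_PATROL

share_a = records["A"] / (records["A"] + records["B"])
print(f"Share of records pointing at neighborhood A: {share_a:.0%}")  # → 78%
```

Neighborhood A's share of the records climbs from 60% toward 80% even though it is no more dangerous than B, which is exactly the discrimination-amplifying loop described above.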


Encode Justice Workshops

You can request a workshop; details are on the website. The following is from the covering page for workshops, and it may prompt you to enquire further.

Encode Justice AI Ethics Workshop Program

Updated April 2023

Questions? Email us at nardos@encodejustice.org and vidya@encodejustice.org.

Who we are: Encode Justice is an international, youth-led organization fighting for the ethical use of artificial intelligence through policy, advocacy, research, and education.

What we’re offering: Either a standalone workshop or series of workshops exploring the ethics of artificial intelligence, the intersection of social justice and computing, and topics like algorithmic bias and data privacy through the lens of specific applications.

Goal: To train the next generation of developers and lawmakers to think critically about how technology will shape our daily lives and reinforce structural inequality while understanding the need for better governance of AI systems.

What we’ve accomplished: Since the launch of our workshop program, we’ve impacted 15,000+ students across New York City, Washington, D.C., San Francisco, Philadelphia, London, and more. Students reported a ~99% satisfaction rate via our post-workshop survey. ~90% of students gave us an accessibility rating of 4 or 5 (on a scale of 1-5).

Available lesson plans (more are being developed):

  • Introduction to AI, AI Ethics, and AI Bias
  • AI ethics + healthcare
  • AI ethics + policing/surveillance
  • AI ethics + media/elections/democracy
  • AI ethics + criminal justice/sentencing
  • AI ethics + climate action
  • AI ethics + HR/hiring
  • AI ethics + warfare

Additional details:

  • Free of cost for you and our participants
  • Around 1 hour in length (can be modified according to your needs)
  • 100% virtual workshops adapted for distance learning; in-person workshops are available as well
  • Emphasis on comprehensibility, interactivity, and student engagement


Closing Remarks

Once again I have found names of individuals from this organisation taking part in discussions with another of the 26 organisations. Expertise and knowledge appear to be shared. So inspirational. I hope you found this of interest and that it makes you want to explore further. The list supplied by Encode Justice is very long, including video links and documentation. That in itself will hopefully impress on you the amount of work, research, and development they are putting in, in an effort to ensure appropriate safeguards are in place in the future.

Ivy Barrow

17th September 2023

Reference Sources






https://twitter.com/encodejustice/status/1682383335337631746 (re: White House involvement/mention)



The following reference sources were provided by Encode Justice on their website. I have copied and pasted them all here for your convenience, and I have listed them all because they are all thought-provoking.

Videos to watch:

“We’re Training Machines to be Racist. The Fight Against Bias is On”: https://www.youtube.com/watch?v=N-Lxw5rcfZg&ab_channel=WIREDUK

“Race, Technology, and Algorithmic Bias | Vision & Justice || Radcliffe Institute”: https://www.youtube.com/watch?v=Y6fUc5_whX8&ab_channel=HarvardUniversity

“Machine Learning and Human Bias”: https://www.youtube.com/watch?v=59bMh59JQDo&ab_channel=Google


Criminal justice, policing, and surveillance

“With AI and criminal justice, the devil is in the data”: https://www.aclu.org/issues/privacy-technology/surveillance-technologies/ai-and-criminal-justice-devil-data

“How our data encodes systematic racism”: https://www.technologyreview.com/2020/12/10/1013617/racism-data-science-artificial-intelligence-ai-opinion/

“Police Shouldn’t Tag Students as Potential Criminals”: https://www.brennancenter.org/our-work/analysis-opinion/police-shouldnt-tag-students-potential-criminals

“Predictive policing algorithms are racist. They need to be dismantled.”: https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/

“Machine Bias”: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

“Can Racist Algorithms Be Fixed?”: https://www.themarshallproject.org/2019/07/01/can-racist-algorithms-be-fixed

“Facial recognition systems show rampant racial bias, government study finds”: https://www.cnn.com/2019/12/19/tech/facial-recognition-study-racial-bias/index.html (our current campaign is focused on banning facial recognition, so this article and the next few are really important reading)

“Defund Facial Recognition”:


“Black man in New Jersey misidentified by facial recognition tech and falsely jailed, lawsuit claims”: https://www.nbcnews.com/news/us-news/black-man-new-jersey-misidentified-facial-recognition-tech-falsely-jailed-n1252489

“Wrongfully Accused by an Algorithm”: https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html

“How is Face Recognition Surveillance Technology Racist?”: https://www.aclu.org/news/privacy-technology/how-is-face-recognition-surveillance-technology-racist/

“Cops in Miami, NYC arrest protesters from facial recognition matches”: https://arstechnica.com/tech-policy/2020/08/cops-in-miami-nyc-arrest-protesters-from-facial-recognition-matches/

“Row over AI that ‘identifies gay faces’”: https://www.bbc.com/news/technology-41188560

“Facial recognition AI can’t identify trans and non-binary people”: https://qz.com/1726806/facial-recognition-ai-from-amazon-microsoft-and-ibm-misidentifies-trans-and-non-binary-people/

“Facial recognition software has a gender problem”: https://www.eurekalert.org/pub_releases/2019-10/uoca-frs102919.php

“Google engineer apologizes after Photos app tags two black people as gorillas”: https://www.theverge.com/2015/7/1/8880363/google-apologizes-photos-app-tags-two-black-people-gorillas

“How China’s Government Is Using AI on Its Uighur Muslim Population”: https://www.pbs.org/wgbh/frontline/article/how-chinas-government-is-using-ai-on-its-uighur-muslim-population/

“Courts Are Using AI to Sentence Criminals. That Must Stop Now”: https://www.wired.com/2017/04/courts-using-ai-sentence-criminals-must-stop-now/

“An artificial intelligence company backed by Microsoft is helping Israel surveil Palestinians”: https://www.vox.com/2019/10/31/20937638/israel-surveillance-network-covers-palestinian-territories



Healthcare

“Millions of black people affected by racial bias in health-care algorithms”: https://www.nature.com/articles/d41586-019-03228-6

“Health Care AI Systems Are Biased”: https://www.scientificamerican.com/article/health-care-ai-systems-are-biased/

“Racial Bias Found in a Major Health Care Risk Algorithm”: https://www.scientificamerican.com/article/racial-bias-found-in-a-major-health-care-risk-algorithm/

“AI bias may worsen COVID-19 health disparities for people of color”: https://www.healthcareitnews.com/news/ai-bias-may-worsen-covid-19-health-disparities-people-color

“A biased medical algorithm favored white people for health-care programs”: https://www.technologyreview.com/2019/10/25/132184/a-biased-medical-algorithm-favored-white-people-for-healthcare-programs/

“Artificial Intelligence in healthcare is racist”: https://www.zdnet.com/article/artificial-intelligence-in-healthcare-is-racist/



Education

“Schools are using software to help pick who gets in. What could go wrong?”: https://www.fastcompany.com/90342596/schools-are-quietly-turning-to-ai-to-help-pick-who-gets-in-what-could-go-wrong

“AI is coming to schools, and if we’re not careful, so will its biases”: https://www.brookings.edu/blog/the-avenue/2019/09/26/ai-is-coming-to-schools-and-if-were-not-careful-so-will-its-biases/

“Researchers Raise Concerns About Algorithmic Bias in Online Course Tools”: https://www.edsurge.com/news/2020-06-26-researchers-raise-concerns-about-algorithmic-bias-in-online-course-tools

“How an AI grading system ignited a national controversy in the U.K.”: https://www.axios.com/england-exams-algorithm-grading-4f728465-a3bf-476b-9127-9df036525c22.html

“Robots May Be Grading Your Kid’s Essays, With Bias”: https://www.wbur.org/cognoscenti/2019/11/12/robo-grading-rich-barlow

“U-M study finds facial recognition technology in schools presents many problems, recommends ban”: https://news.umich.edu/u-m-study-finds-facial-recognition-technology-in-schools-presents-many-problems-recommends-ban/



Immigration

“A lawsuit against ICE reveals the danger of government-by-algorithm”: https://www.washingtonpost.com/outlook/2020/03/05/lawsuit-against-ice-reveals-danger-government-by-algorithm/

“ICE rigged its algorithms to keep immigrants in jail, claims lawsuit”: https://www.theverge.com/2020/3/3/21163013/ice-new-york-risk-assessment-algorithm-rigged-lawsuit-nyclu-jose-velesaca

“How Automation Bias Encourages the Use of Flawed Algorithms”:  https://slate.com/technology/2020/03/ice-lawsuit-hijacked-algorithm.html

“Immigration decision-making: Artificial Intelligence may violate human rights”: https://www.setzerimmigration.com/articles/immigration-decision-making-artificial-intelligence-may-violate-human-rights/

“Using AI in Immigration Decisions Could Jeopardize Human Rights”: https://www.cigionline.org/articles/using-ai-immigration-decisions-could-jeopardize-human-rights


Elections, news, and democracy

“Deepfake democracy: Here’s how modern elections could be decided by fake news”: https://www.weforum.org/agenda/2020/10/deepfake-democracy-could-modern-elections-fall-prey-to-fiction/

“Deepfakes are coming for American democracy. Here’s how we can prepare”: https://www.washingtonpost.com/opinions/2020/09/10/deepfakes-are-coming-american-democracy-heres-how-we-can-prepare/

“Readers Beware: AI Has Learned to Create Fake News Stories”: https://www.wsj.com/articles/readers-beware-ai-has-learned-to-create-fake-news-stories-11571018640

“The new AI tools spreading fake news in politics and business”: https://www.ft.com/content/55a39e92-8357-11ea-b872-8db45d5f6714

“The next-generation bots interfering with the US election”: https://www.nature.com/articles/d41586-020-03034-5

“Fears of ‘digital dictatorship’ as Myanmar deploys AI”: https://www.reuters.com/world/china/fears-digital-dictatorship-myanmar-deploys-ai-2021-03-18/


Financial services and housing

“AI has exacerbated racial bias in housing. Could it help eliminate it instead?”: https://www.technologyreview.com/2020/10/20/1009452/ai-has-exacerbated-racial-bias-in-housing-could-it-help-eliminate-it-instead/

“Housing discrimination goes high tech”: https://archive.curbed.com/2019/12/17/21026311/mortgage-apartment-housing-algorithm-discrimination

“HUD’s new housing rule has an A.I. loophole that’s bad for America”: https://www.cnbc.com/2019/10/03/huds-new-housing-rule-has-an-ai-loophole-thats-bad-for-america.html

“Will Machine Learning Algorithms Erase The Progress Of The Fair Housing Act?”: https://www.forbes.com/sites/fernandezelizabeth/2019/11/17/will-machine-learning-algorithms-erase-the-progress-of-the-fair-housing-act/?sh=649d521f1d7c

“Redlined by Algorithm”: https://www.dissentmagazine.org/online_articles/redlined-by-algorithm

“Trump housing plan would make bias by algorithm ‘nearly impossible to fight’”: https://www.theguardian.com/us-news/2019/oct/22/trump-housing-plan-would-make-bias-by-algorithm-nearly-impossible-to-fight

“Apple Card algorithm sparks gender bias allegations against Goldman Sachs”:  https://www.washingtonpost.com/business/2019/11/11/apple-card-algorithm-sparks-gender-bias-allegations-against-goldman-sachs/

“Credit Scores Could Soon Get Even Creepier and More Biased”: https://www.vice.com/en/article/zmpgp9/credit-scores-could-soon-get-even-creepier-and-more-biased

“Is an Algorithm Less Racist Than a Loan Officer?”: https://www.nytimes.com/2020/09/18/business/digital-mortgages.html

“How Algorithms Can Bring Down Minorities’ Credit Scores”: https://www.theatlantic.com/technology/archive/2016/12/how-algorithms-can-bring-down-minorities-credit-scores/509333/


Warfare/National Security

“Stopping Killer Robots”: https://www.hrw.org/report/2020/08/10/stopping-killer-robots/country-positions-banning-fully-autonomous-weapons-and

“Do Killer Robots Violate Human Rights?”: https://www.theatlantic.com/technology/archive/2015/04/do-killer-robots-violate-human-rights/390033/

“The weaponisation of AI: An existential threat to human rights and dignity”: https://www.giswatch.org/node/6205

“Killer Robots Aren’t Regulated. Yet.”: https://www.nytimes.com/2019/12/13/technology/autonomous-weapons-video.html

“Algorithmic Foreign Policy”: https://blogs.scientificamerican.com/observations/algorithmic-foreign-policy/


Hiring and employment

“Amazon scraps secret AI recruiting tool that showed bias against women”: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G

“All the Ways Hiring Algorithms Can Introduce Bias”: https://hbr.org/2019/05/all-the-ways-hiring-algorithms-can-introduce-bias

“Study shows how AI exacerbates recruitment bias against women”: https://thenextweb.com/neural/2020/12/02/study-shows-how-ai-exacerbates-recruitment-bias-against-women/

“Here’s why an AI expert says job recruiting sites promote employment discrimination”: https://www.businessinsider.com/ai-expert-job-sites-must-prove-not-exacerbating-inequality-2020-1


The Benefits


Criminal justice and policing

“San Francisco says it will use AI to reduce bias when charging people with crimes”: https://www.theverge.com/2019/6/12/18663093/ai-sf-district-attorney-police-bias-race-charge-crime

“SF DA Gascón launching tool to remove race when deciding to charge suspects”: https://www.sfchronicle.com/crime/article/SF-DA-Gasc-n-launching-tool-to-remove-race-when-13971721.php

“Blind Charging”: https://policylab.stanford.edu/projects/blind-charging.html

“San Francisco prosecutors turn to AI to reduce racial bias”: https://apnews.com/article/7e9305d3bccf4f8caee9960e772079f7

“Researchers say AI tools could make justice systems more just”: https://www.axios.com/ai-court-reporter-justice-system-a71f8bd3-740f-4602-b99a-59686b07873c.html

“AI tools could predict spates of hate crime”: https://eandt.theiet.org/content/articles/2018/12/ai-tools-could-predict-spates-of-hate-crime/

“Google’s New Site Uses Artificial Intelligence to Track Hate Crimes”: https://fortune.com/2017/08/19/google-propublica-artificial-intelligence-hate-crimes/

“A large-scale analysis of racial disparities in police stops across the United States”: https://5harad.com/papers/100M-stops.pdf



Healthcare

“Healthcare is ailing. AI can help”: https://builtin.com/healthcare-technology/healthcare-system-artificial-intelligence

“UF researchers are looking into the eyes of patients to diagnose Parkinson’s disease”: https://www.eng.ufl.edu/newengineer/in-the-headlines/scientists-are-looking-into-the-eyes-of-patients-to-diagnose-parkinsons-disease/

“AI test rules out a COVID-19 diagnosis within one hour in emergency departments”: https://www.healthcareitnews.com/news/emea/ai-test-rules-out-covid-19-diagnosis-within-one-hour-emergency-departments

“AI-supported test predicts eye disease three years before symptoms”: https://www.sciencedaily.com/releases/2020/12/201218112519.htm

“AI can now help us detect disease at its earliest stages”: https://www.wired.co.uk/article/artificial-intelligence-healthcare

“DeepMind’s latest AI breakthrough could turbocharge drug discovery”: https://www.fastcompany.com/90584816/deepmind-alphafold-alphabet-ai-proteins-drug-discovery

“AI-designed serotonin sensor may help scientists study sleep and mental health”: https://www.nih.gov/news-events/news-releases/ai-designed-serotonin-sensor-may-help-scientists-study-sleep-mental-health

“AI gauges the mental health of cancer patients through eye movements”: https://www.engadget.com/ai-detects-mental-health-through-eye-movements-145449742.html

“Digital Tools Are Revolutionizing Mental Health Care in the U.S.”: https://hbr.org/2020/12/digital-tools-are-revolutionizing-mental-health-care-in-the-u-s

“How Technology Can Improve Health Care for Rural Americans”: https://www.ucf.edu/online/healthcare/news/technology-can-improve-health-care-rural-americans/

“How AI is helping fix rural and Native American health challenges”: https://www.healthcareitnews.com/news/how-ai-helping-fix-rural-and-native-american-health-challenges



Education

“Harnessing AI to improve education for everyone”: https://www.shine.cn/news/metro/2012282253/

“The New World Of AI-Based Adaptive Education”: https://www.entrepreneur.com/article/362540

“UCF researchers developing AI to help students with autism”: https://www.fox35orlando.com/news/ucf-researchers-developing-ai-avatar-to-help-students-with-autism

“Texas launches artificial intelligence platform to assist college-bound students with financial aid”: https://www.houstonchronicle.com/news/houston-texas/houston/article/Texas-launches-AI-platform-to-assist-15770561.php

“An AI is Helping Kids on the Autism Spectrum Learn to Code”: https://www.mynews13.com/fl/orlando/news/2020/12/07/an-ai-is-helping-kids-on-the-autism-spectrum-learn-to-code

“Eton pioneers classroom AI to help students improve punctuation and grammar”: https://www.telegraph.co.uk/news/2020/12/25/eton-pioneers-classroom-ai-help-students-improve-punctuation/

“Develop.com Adds an AI-Based Tool that Recommends Courses to Complement Skills”: https://iblnews.org/develop-com-adds-an-ai-based-tool-that-recommends-courses-to-complement-skills/


Elections, news, and democracy

“Microsoft claims its AI framework spots fake news better than state-of-the-art baselines”: https://venturebeat.com/2020/04/07/microsoft-ai-fake-news-better-than-state-of-the-art-baselines/

“Who does A.I. think will win today’s election?”: https://fortune.com/2020/11/03/who-will-win-todays-election-a-i-knows/

“Election polls were a disaster this year. Here’s how AI could help”: https://www.fastcompany.com/90575531/ai-election-polling

“How AI predictions fared against pollsters in the 2020 U.S. election”: https://venturebeat.com/2020/11/06/how-ai-predictions-fared-against-pollsters-in-the-2020-u-s-election/

“Artificial Intelligence Shows Potential to Gauge Voter Sentiment”: https://www.wsj.com/articles/artificial-intelligence-shows-potential-to-gauge-voter-sentiment-11604704009

“MIT CSAIL’s AI can detect fake news and political bias”: https://venturebeat.com/2018/10/03/mit-csails-ai-can-detect-fake-news-and-political-bias/

“How next-gen computer generated maps detect partisan gerrymandering”: https://www.sciencenews.org/article/gerrymandering-elections-next-gen-computer-generated-maps

“AI-drawn voting districts could help stamp out gerrymandering”: https://techcrunch.com/2020/09/04/ai-drawn-voting-districts-could-help-stamp-out-gerrymandering/

“Using AI to detect COVID-19 misinformation and exploitative content”: https://ai.facebook.com/blog/using-ai-to-detect-covid-19-misinformation-and-exploitative-content/


Financial services

“How AI Can Improve Financial Analytics”: https://www.forbes.com/sites/louiscolumbus/2020/07/23/how-ai-can-improve-financial-analytics/

“Artificial intelligence is reshaping finance”: https://www.ft.com/content/c7d9a81c-e6a3-4f37-bbfd-71dcefda3739

“Banks look at ‘explainable’ AI systems to boost consumer trust”: https://www.rollcall.com/2020/12/08/banks-look-at-explainable-ai-systems-to-boost-consumer-trust/



Human rights

“Human rights activists want to use AI to help prove war crimes in court”: https://www.technologyreview.com/2020/06/25/1004466/ai-could-help-human-rights-activists-prove-war-crimes/



Climate

“Here are 10 ways AI could help fight climate change”: https://www.technologyreview.com/2019/06/20/134864/ai-climate-change-machine-learning/

“How AI Is Helping Solve Climate Change”: https://www.smashingmagazine.com/2019/09/ai-climate-change/

“Artificial Intelligence—A Game Changer for Climate Change and the Environment”: https://blogs.ei.columbia.edu/2018/06/05/artificial-intelligence-climate-environment/

“How artificial intelligence can tackle climate change”: https://www.nationalgeographic.com/environment/2019/07/artificial-intelligence-climate-change/

“California Firefighters Tap AI for an Edge in Battling Wildfires”: https://www.wsj.com/articles/california-firefighters-tap-ai-for-an-edge-in-battling-wildfires-11601544600

“Google’s AI flood warnings now cover all of India and have expanded to Bangladesh”: https://www.theverge.com/2020/9/1/21410252/google-ai-flood-warnings-india-bangladesh-coverage-prediction

“New weather model could increase tornado-warning times”: https://www.icds.psu.edu/new-weather-model-could-increase-tornado-warning-times/

“Scientists want AI to help predict dangerous hurricanes”: https://nypost.com/2020/09/03/scientists-want-ai-to-help-predict-dangerous-hurricanes/



“How Technology Could Revolutionize Refugee Resettlement”: https://www.theatlantic.com/international/archive/2019/04/how-technology-could-revolutionize-refugee-resettlement/587383/

“Just the job: how AI is helping build a better life for refugees”: https://www.ft.com/content/9332fffc-ec57-11e8-89c8-d36339d835c0

“How artificial intelligence is transforming the global battle against human trafficking”: https://www.foxnews.com/tech/artificial-intelligence-global-battle-human-trafficking

“How gun-detection technology promises to help prevent mass shootings”: https://www.fastcompany.com/90388822/how-gun-detection-technology-promises-to-help-prevent-mass-shootings