    “Is DEI a dirty word for AI?” – Check Point’s responsible AI warning

By techupdateadmin | September 20, 2025 | 5 min read

[Image: Charlotte Wilson at the Cyber Leader Summit]

    We tend to think of technology as pretty neutral – unthinking, unfeeling, and therefore unburdened by the human tendency towards bias – but with AI, the opposite is true. Unfortunately, and perhaps now more than ever, the internet is full of content that reflects human bigotry and AI not only picks up on it, but amplifies it in the content it produces.

Generative AI models, especially the large consumer-focused ones, are trained on data scraped from all corners of the internet – articles, videos, books, tweets, social media posts, and more.

Of course, this will ring alarm bells for anyone who has witnessed the hostility that has become practically synonymous with social media in recent years. We spoke to Check Point Software's Head of Enterprise, Charlotte Wilson, at the recent Cyber Leader Summit to find out more.



    What you want to see

Generative AI is made to be helpful – if the models didn't feel useful, they wouldn't be popular. But these models are competing with each other, and if one model doesn't tell you what you want to hear, another might – to the point that they've become almost sycophantic:

    “So it’s not prioritizing accuracy, it’s prioritizing what it knows, what it’s learned, and what it thinks you want to see. So it not only is inaccurate in that respect, [but] it’s also kind of giving you what it thinks you want to hear,” Wilson explains.

What the model has learned and what it 'knows' is inherently tainted by human bias. But ChatGPT is no longer just a fun chatbot that people are playing around with. Businesses use these models in recruitment, in data analysis, in HR, and in their everyday operations.

AI can't be left to its own devices when dealing with humans, Wilson argues, and that's where a new job role of 'AI checkers' will emerge, assessing a model's output for any bias and addressing the issue:


    “I think there’s a space for AI checkers, and there are organizations out there that are doing that work. It’s checking, are you safe? Are you impacted? Are you infected? Think – if it’s to do with something that impacts a person, I think you should check it.”

    Continued liability

    But what happens if that’s not enough? I’m thinking back to a conversation I had with Workday, who argued that humans ‘aren’t necessarily the best benchmark’ for being unbiased, and who similarly explained that accountability and responsibility should remain with humans.

Unfortunately, Workday is now facing a lawsuit amid allegations that the AI the firm uses to screen job applicants discriminated against older candidates – a claim that Workday, of course, disputes. But with such a tainted information pool, can discrimination in AI ever really be avoided?



    “I don’t honestly know because I don’t think you’ll fix the fact that we’ll have to provide data. I don’t think we’ll fix the internet,” Wilson admits.

    “So if your source of truth is the internet at some point, we’re never going to fix that. We’re never going to correct it because our adversaries are pumping that place full of bad ****. So we’re never going to fix that.”

"You can't govern that, which means you probably can't govern when you're getting hallucinations. You probably actually have to look at it and go, that doesn't seem [right] – let's just fact-check and spot-check things."

The solution, then? Check, double check, and check again. Presumably, this will grow a whole new industry of AI bias moderators – hopefully one big enough to offset the AI-fuelled cuts to entry-level positions the job market is currently suffering.

    A varying appetite

There's an uncomfortable 'elephant in the room' question here, given the current political climate: is there actually an appetite to correct bias?

The Trump administration has rolled back DEI policies, and although many tech companies operate globally, plenty are headquartered in the US.

    “It became global because Microsoft is global, AWS is global, Accenture is global, [you could] name all these companies that have either rolled back DEI or completely eradicated it,” she points out.

    Surely, I ask, firms that operate in countries with inclusivity and anti-discrimination laws still follow the rules?

    “They do, they don’t break the rules,” Wilson says, “they don’t break the laws, but they no longer have a team of people whose job is solely to make sure they’re providing equity. So they still can’t say ‘you can’t have a job because you’re a woman, you can’t have a job because you’re a black person’ – they follow the rules but they’re no longer going out to set the boundary of equity at the beginning.”

    This suggests there might not be much of a drive to correct inequalities in the hiring process to begin with, and that inequalities might continue to be amplified by AI models unless sweeping changes are made in the tech world and beyond.

Wilson's final advice for businesses is to be purposeful about the AI you deploy, and to always be aware of the human impact your model may have within your company and beyond.

    “Think about what you’re using,” she says. “Be really, really clear on what you’re trying to solve, because it’s not going to solve everything and actually humans still have a really good place.”

    “If that thing that you’re trying to improve impacts a decision on a person, have a governance check and make sure the board that governs [it] includes people whose only function is to look at it from a human fairness perspective.”
