AI Is Everywhere — But Are You Building It Responsibly?

Artificial intelligence (AI) touches billions of people daily. It suggests recommended content on your favourite streaming service and helps you avoid traffic while you drive. AI can also help businesses predict how likely someone is to repay a loan or determine the most efficient routes for distributors to ship goods quickly and reliably.
There is no doubt the predictive capabilities of these AI systems have helped businesses scale rapidly. With applications in fields from retail services to logistics and personal finance, the global AI industry is projected to reach annual revenue of US$291.5 billion by 2026. In Asia Pacific, the AI market is also growing and is estimated to be worth around US$450 million by 2025. But for all the good AI has contributed to the world, the technology isn’t perfect. AI algorithms can create serious business and societal pitfalls if not kept in check.
AI is trained on large amounts of data collected over time. If that data reflects bias, or is not representative of the people the system will impact, the resulting system can amplify those biases. For example, the recently launched BlenderBot 3, a conversational AI, perpetuated negative bias by generating unsafe and offensive remarks during a public demo. Research has also shown that many popular open-source benchmark training datasets, the very ones many new machine learning models are measured against, are either not valid for the context in which they have been widely reused or contain data that is inaccurate or mislabelled.
Ways to combat bias in AI
As a result, governments around the world have begun drafting and implementing AI regulations. Singapore has a law focused on accountable and responsible development of AI, while Thailand has most AI initiatives embedded within policies and strategies to strengthen the development of AI-related technologies.
Regulation, when well-crafted and appropriately applied, helps ensure AI systems are ethical, inclusive, and unbiased.
In the meantime, business leaders need to pave the way to a more equitable AI infrastructure. Here’s how:
Be transparent
To increase trustworthiness, many regulations require businesses to be transparent about how they trained their AI model (a program trained on a set of data to recognise certain types of patterns), the factors used in the model, its intended and unintended uses, and any known bias. Policymakers usually request this in the form of data sheets or model cards, which act like nutrition labels for AI models. LIKE.TG, for example, publishes its model cards so customers and prospects can learn how the models were trained to make predictions.
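To make that concrete, here is a minimal sketch of what a machine-readable model card might look like. The schema, field names, and values below are illustrative assumptions for this example, not any published standard or LIKE.TG’s actual format:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ModelCard:
    """Illustrative 'nutrition label' for an AI model (hypothetical schema)."""
    model_name: str
    intended_use: str
    out_of_scope_uses: list[str]   # uses the model was not designed for
    training_data: str             # description of the training dataset
    factors: list[str]             # input features the model relies on
    known_biases: list[str]        # documented limitations and biases

card = ModelCard(
    model_name="loan-repayment-predictor-v2",
    intended_use="Rank loan applications for human review",
    out_of_scope_uses=["Fully automated loan denial"],
    training_data="Anonymised historical loan outcomes, 2015-2021",
    factors=["income", "debt_to_income_ratio", "payment_history"],
    known_biases=["Under-represents applicants with thin credit files"],
)

# Publish this alongside the model so customers can see how it was built.
print(json.dumps(asdict(card), indent=2))
```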
Make your AI interpretable
Why does an AI system make the recommendation or prediction it does? You might shrug when AI recommends you watch a new movie, but you’ll definitely want to know how AI weighed the pros and cons of your loan application. Those explanations need to be understood by the person receiving the information — such as a lender or loan officer — who then must decide how to act upon the recommendation an AI system is making.
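As an illustration of what such an explanation could look like in practice, here is a minimal sketch that turns the per-feature contributions of a simple linear scoring model into plain-language statements a loan officer could read. The feature names and weights are hypothetical stand-ins for a trained model:

```python
# Hand-set weights stand in for a trained model (e.g., logistic regression).
WEIGHTS = {
    "income": 0.4,
    "debt_to_income_ratio": -0.7,
    "missed_payments": -0.9,
}

def explain(applicant: dict[str, float]) -> list[str]:
    """Rank per-feature contributions and phrase them for a human reader."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in applicant.items()}
    # Sort by absolute impact so the biggest drivers come first.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return [f"{name} {'raised' if c > 0 else 'lowered'} the score by {abs(c):.2f}"
            for name, c in ranked]

for line in explain({"income": 1.2, "debt_to_income_ratio": 0.8,
                     "missed_payments": 2.0}):
    print(line)
```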
That said, one study conducted by researchers at IBM Research AI, Cornell University, and Georgia Institute of Technology found that even people well versed in AI systems often over-relied on the system’s results and misinterpreted them. Misunderstanding how an AI system works can have disastrous consequences in situations that call for close human attention. The bottom line? More real-life testing needs to occur with the people using the AI to ensure they understand the explanations.
Keep a human in the loop
Some regulations call for a human to make the final decision about anything with legal or similarly significant effects, such as hiring, loans, school acceptance, and criminal justice recommendations. By requiring human review rather than automating the decision, regulators expect bias and harm to be more easily caught and mitigated.
In some cases, humans may defer to AI recommendations rather than rely on their own judgment. Combined with the difficulty of grasping the rationale behind an AI decision, this tendency means humans don’t actually provide that safety mechanism against bias.
That doesn’t mean high-risk, critical decisions should simply be automated. It means we need to ensure that people can interpret AI explanations and are incentivised to flag a decision for bias or harm. For instance, the AI governance framework in Singapore, along with its personal protection data policy, allows AI to make recommendations for decisions with legal impact (for example, a loan approval or rejection) but requires a human to make the final decision. This not only promotes AI adoption, but also builds customer confidence and trust.
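A minimal sketch of what such a human-in-the-loop step might look like, assuming a hypothetical review workflow in which the AI only suggests and a reviewer decides or escalates:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """What the AI hands to a reviewer: a suggestion, never a verdict."""
    applicant_id: str
    suggestion: str          # e.g. "approve" or "reject"
    explanation: list[str]   # plain-language reasons for the suggestion

def final_decision(rec: Recommendation, reviewer_choice: str,
                   flag_for_bias: bool = False) -> dict:
    """The human makes the legally significant call; the AI only recommends."""
    if flag_for_bias:
        # Route to an ethics or audit queue instead of acting on it.
        return {"applicant_id": rec.applicant_id, "status": "escalated"}
    return {
        "applicant_id": rec.applicant_id,
        "status": reviewer_choice,        # may differ from rec.suggestion
        "ai_suggestion": rec.suggestion,  # logged for later audit
    }

rec = Recommendation("A-1042", "reject", ["debt_to_income_ratio lowered the score"])
print(final_decision(rec, reviewer_choice="approve"))
```

Note that the reviewer’s choice, not the AI’s suggestion, becomes the recorded status, and that flagging always takes priority over acting on the recommendation.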
How to add ethical AI practices to your business
Some executives may be understandably reluctant to collect sensitive data, such as age, race, and gender, in the first place. Some worry about inadvertently biasing their models or being unable to properly comply with privacy regulations. However, you cannot achieve fairness through inaction. Companies need to collect this data in order to analyse whether there is disparate impact on different subpopulations. This sensitive data can be stored in a separate system where ethicists or auditors can access it for the purpose of bias and fairness analysis.
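As one concrete example of such an analysis, here is a minimal sketch of a disparate impact check using the widely cited four-fifths rule as the threshold. The outcome data and group labels are hypothetical:

```python
# Hypothetical decisions joined with sensitive attributes held in a
# separate, access-controlled store for audit purposes only.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(rows):
    """Share of favourable outcomes per group."""
    totals, favourable = {}, {}
    for group, approved in rows:
        totals[group] = totals.get(group, 0) + 1
        favourable[group] = favourable.get(group, 0) + int(approved)
    return {g: favourable[g] / totals[g] for g in totals}

rates = selection_rates(outcomes)
ratio = min(rates.values()) / max(rates.values())
# The four-fifths rule flags a ratio below 0.8 as potential disparate impact.
print(rates, f"impact ratio = {ratio:.2f}", "FLAG" if ratio < 0.8 else "ok")
```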
In a perfect world, AI systems would be built without bias, but this is not a perfect world. While some guardrails can minimise the effects of AI bias on society, no dataset or model can ever be truly bias free. Statistically speaking, bias means “error,” so a bias-free model must make perfect predictions or recommendations every time — and that just isn’t possible.
There is a lot to understand when it comes to creating and implementing AI responsibly and in compliance with regulations, and it falls to business leaders to build ethical AI practices.
Executives can start by hiring a diverse group of experts from many backgrounds in ethics, psychology, ethnography, critical race theory, and computer science. They can build on that by creating a company culture that rewards employees for flagging risks — empowering them to not only ask, “Can we do this?” but also, “Should we do this?” — and by implementing consequences when harms are ignored.
Prepare for the future of AI. Find out how.
DOWNLOAD REPORT
This post originally appeared on the U.S. version of the LIKE.TG blog.
