MIT Warns: ‘Bring Your Own AI’ Trend Poses Risks to U.S. Companies
CAMBRIDGE, Mass. – A growing number of U.S. employees are bypassing company protocols and using generative artificial intelligence (AI) tools without official approval, a practice dubbed “bring your own AI” or BYOAI, according to researchers at the Massachusetts Institute of Technology (MIT). This trend, while potentially boosting individual productivity, introduces significant security and compliance risks that organizations can no longer afford to ignore.
“Make no mistake, I’m not talking theory here,” said Nick van der Meulen, a research scientist at MIT’s Center for Information Systems Research, during a recent MIT Sloan Management Review webinar. “This has been happening for quite some time now.”
The temptation to use unsanctioned AI tools is particularly strong in companies that have restricted or banned publicly available AI chatbots like ChatGPT due to concerns about data security and regulatory compliance. Major corporations, including Samsung, Verizon, and J.P. Morgan Chase, have already taken steps to limit the use of external AI platforms. These actions, while intended to protect sensitive data, may inadvertently drive employees to seek out alternative, less secure solutions.
The urgency surrounding BYOAI is increasing as AI models become more powerful and widely accessible. Data breaches in the U.S. hit record highs this year, with compromised data often stemming from unsecured applications. This adds another layer of concern, as sensitive company information could be inadvertently exposed through the use of unregulated AI tools.
Research by van der Meulen and fellow research scientist Barbara Wixom indicates that approximately 16% of employees in large organizations were using AI tools in 2024. They project that number to surge to 72% by 2027, encompassing both sanctioned and unsanctioned AI usage.
“What happens when sensitive data gets entered into platforms that you don’t control? When business decisions are made based on outputs that no one quite understands?” van der Meulen asked.
The MIT researchers distinguish between two primary types of generative AI implementations:
GenAI tools: These are individual productivity enhancers like ChatGPT or Microsoft Copilot. They are readily available but often lack a clear return on investment (ROI) for the company.
GenAI solutions: These are company-wide deployments of AI across various processes and business units, designed to deliver tangible value to the entire enterprise. For example, some companies are using AI to improve supply chain management, leading to significant cost savings.
Wixom emphasized the importance of differentiating between these two types of AI, stating it “helps us tackle each differently and manage their value properly.”
Van der Meulen characterizes GenAI tools as a cost management issue, akin to managing the use of spreadsheets or word processing software. “In a way, they simply represent, for most organizations, the new cost of doing business,” he said. GenAI solutions, conversely, offer measurable improvements in efficiency or sales across different departments.
Wolters Kluwer, an IT services provider, offers a case in point. They developed a GenAI tool capable of extracting raw text from scanned images of lien documents. This has enabled banks to dramatically reduce loan processing times from weeks to days.
“That is not something that an individual employee at either Wolters Kluwer or the bank could have done on their own with a GenAI tool,” van der Meulen explained. “It takes effort from many stakeholders to create these solutions and integrate them into systems.”
The researchers stress that the distinction between AI as a tool and AI as a solution is crucial for determining governance. When AI is employed as a tool, the employee bears responsibility for its effective use. Conversely, when AI is implemented as a company-wide solution, the organization assumes ownership of its success.
Tips to Manage BYOAI:
A complete ban on GenAI tools is not a practical or effective response.
“Employees won’t just stop using GenAI; they’ll start looking for workarounds,” van der Meulen cautioned. “They’ll turn to personal devices, use unsanctioned accounts, hidden tools. So instead of mitigating risk, we’d have made it harder to detect and manage.”
Instead, the researchers recommend the following strategies:
- Establish clear guardrails and guidelines: Organizations must clearly define acceptable and unacceptable AI usage. This includes specifying what types of information can be entered into AI tools. According to the MIT study, only 30% of senior data and tech leaders say they have well-developed AI use regulations.
- Invest in training and education: Employees need what the researchers term “AI direction and evaluation skills” (AIDE skills). Practical, hands-on training is essential.
Zoetis, a global animal health company, provides a model. Its data analytics unit runs hands-on AI practice sessions three times a week, each attended by more than 100 employees.
J.D. Williams, Zoetis’ chief data and analytics officer, compared the training to teaching people how to change tires – by making them change tires.
- Provide approved tools from trusted vendors: Rather than an outright ban or unrestricted access, organizations should provide a curated selection of AI tools for employees to use.
Zoetis has implemented a “GenAI app store” where employees request licensed seats. They justify their need for the app and then share their experiences, helping the company identify valuable applications while controlling costs. “It’s how you avoid paying $50 a month for Joe from Finance who … used it exactly once to write a birthday card,” van der Meulen quipped.
Wixom also suggests that organizations just starting their GenAI journey create a center of excellence — it could be a single worker or a small team — to provide an enterprise-wide point of view and coordinate collaboration across departments.
“It is critically important to remind everyone what the end game is here,” Wixom said. “The point of AI, regardless of its flavor, should be to create value for our organizations and ideally value that hits our books.”
Counterargument:
Some argue that focusing on BYOAI stifles innovation and prevents employees from leveraging potentially beneficial tools. While it's true that overly restrictive policies can hinder progress, the risks associated with unsecured AI usage, including data breaches, compliance violations, and biased outputs, outweigh the benefits of an unregulated environment. A balanced approach – one that combines clear guidelines, comprehensive training, and access to approved tools – is essential for fostering responsible AI adoption.
FAQ: Managing ‘Bring Your Own AI’ in the Workplace
What is ‘Bring Your Own AI’ (BYOAI)? BYOAI refers to the practice of employees using generative AI tools in the workplace without explicit company approval or oversight.
What are the primary risks associated with BYOAI? Key risks include data security breaches, compliance violations, exposure of intellectual property, creation of biased outputs, and lack of control over the AI models used.

Should companies fully ban the use of AI tools by employees? A complete ban is generally not recommended. It can drive employees to find less secure workarounds and stifle innovation. A balanced approach of guidelines, training, and approved tools is more effective.
What are ‘AI direction and evaluation skills’ (AIDE skills)? AIDE skills encompass the knowledge and abilities employees need to effectively use and evaluate AI tools, including understanding their limitations and potential biases.
What is a ‘GenAI app store’? This refers to a curated selection of approved AI tools that employees can request access to, allowing organizations to maintain control over AI usage and manage costs.