Maryland Proposes AI Toy Safety Act: US State Regulation & CPSC Approach

Maryland lawmakers are considering a first-of-its-kind bill aimed at regulating the rapidly evolving world of artificial intelligence in children’s toys. The proposed Maryland Artificial Intelligence Toy Safety Act, introduced on February 12, 2026, seeks to establish a comprehensive framework for AI-enabled toys, addressing concerns about data privacy, content safety, and potential psychological impacts on young users. This move comes as federal regulators largely defer to existing safety standards, leaving a gap that states are beginning to fill.

The legislation, which amends the Maryland Consumer Protection Act, would require manufacturers to conduct rigorous child safety assessments before selling AI toys in the state. It reflects a growing national conversation about the need for oversight of AI technologies, particularly those marketed to vulnerable populations. The act’s broad scope—covering devices using machine learning, conversational AI, and behavioral modeling—signals a proactive approach to managing the risks associated with increasingly sophisticated AI-powered playthings.

Pre-Market Assessments and Penalties

Under the proposed act, manufacturers would face pre-market compliance obligations, including comprehensive child safety assessments. Toys already on the market as of July 1, 2026, would have until January 1, 2027, to undergo these evaluations. Non-compliance could result in significant penalties, including civil fines of up to $50,000 per violation and mandatory product recalls. This financial risk underscores the seriousness with which Maryland legislators are approaching the issue.

Data Privacy at the Forefront

A central component of the bill focuses on data privacy. Manufacturers would be limited to collecting the minimum amount of child user data necessary for the toy’s core functionality, and all data collected must be encrypted. The act explicitly prohibits the sale or transfer of child data to third parties, its use in training unrelated AI models, targeted advertising, or retention beyond 12 months without renewed parental consent. In the event of a data breach, manufacturers would be required to notify affected parents or guardians within 48 hours, a notably short timeframe designed to prioritize child data security.

Content Restrictions and Parental Controls

The proposed legislation also addresses the content generated by AI toys. Toys would be prohibited from generating content that is sexual, violent, emotionally manipulative, or instructional about harmful behaviors. Manufacturers would be required to incorporate content moderation tools, age-appropriate conversational filters, and an automatic “safe mode” triggered by harmful or unknown inputs. The act prohibits marketing AI toys as emotional companions, parental substitutes, or psychological counselors, acknowledging concerns about the potential for these toys to negatively impact children’s emotional development.

Oversight and Enforcement

To ensure compliance, the act would establish the Artificial Intelligence Toy Safety Review Panel within the Consumer Protection Division of the Office of the Attorney General. The panel would be responsible for reviewing manufacturer compliance, conducting independent audits of AI toys, evaluating industry safety standards, and recommending updates to the legislation. Its first annual report to the General Assembly would be due December 1, 2027, signaling a commitment to ongoing oversight and adaptation as AI technologies evolve.

Federal Deference and State Action

This state-level initiative comes as the Consumer Product Safety Commission (CPSC), the primary federal regulator, has indicated it lacks the authority to address the non-physical harms potentially posed by AI in toys. On February 13, 2026, Acting Chairman Peter A. Feldman stated in a letter to Senators Klobuchar, Cantwell, and Markey that the CPSC’s statutory mission is traditionally focused on physical safety risks and does not extend to evaluating mental, emotional, or psychological harm. The CPSC maintains its approach aligns with the administration’s policy of encouraging innovation while respecting agency boundaries.

This divergence in approach leaves the regulatory landscape fragmented. While federal regulators remain focused on traditional product safety concerns, states like Maryland are stepping in to address the distinct challenges posed by AI, and manufacturers and sellers of AI-enabled toys must now navigate a potentially complex patchwork of state-level regulations.

It remains to be seen whether the Maryland Artificial Intelligence Toy Safety Act will pass, but it represents a significant step toward addressing the lack of regulatory oversight in this emerging field. The bill’s progress will likely be closely watched by other states considering similar legislation, potentially leading to a more comprehensive national framework for AI toy safety.

The debate over regulating AI in children’s products is far from over. Continued monitoring of both state and federal developments will be crucial for stakeholders in this rapidly evolving space.

Disclaimer: This article provides informational content and should not be considered legal advice. Consult with a qualified professional for advice tailored to your specific circumstances.

Sophie Lin - Technology Editor
