The figure in the image had six fingers on one hand. The words on the figure’s hat and vest were indecipherable gibberish. What appeared to be a turkey was wearing an orange hunting vest.
The image, shared Friday by the state Department of Energy and Environmental Protection on the agency's Facebook and Instagram pages, was mocked online as obviously AI-generated.
By Tuesday, the image had been removed and replaced with something more traditional: a photo that included a flesh-and-blood human being, with the requisite number of fingers.
“An earlier version of this post contained an AI-generated image,” the post said. “It is not common practice for DEEP communications to use AI-generated images. DEEP communications typically uses real photos obtained in the field, or stock images, for our social media posts wherever possible.”
The state wants its employees to use generative AI whenever possible, within a predefined framework and with appropriate guardrails. But the DEEP post is an example of how pervasive AI has become, and how difficult it is to monitor its use across the state’s many agencies and 46,000 employees.
“Employees are encouraged to use generative AI, with vigorous human oversight, to help offload menial tasks and assist with productivity,” state Department of Administrative Services spokesman Leigh Appleby said.
Experts say that while the quickly evolving technology offers opportunities to make government more efficient, it can also create vulnerabilities in systems that hold a state’s worth of personal data.
“I do think that we will be forced to reckon with how we build the next generation of critical thought, perhaps without all of the hands-on learning,” said Mark Raymond, chief information officer for the state of Connecticut. “There are concerns that blind reliance on the technology without addressing that other part, how we keep people learning those skills, would be problematic.”
AI, Raymond said, is “creeping in, in ways that are harder to detect and harder to raise awareness of.”
AI currently in use
Two years ago, the state legislature passed a law creating a working group to keep an eye on how AI is used and how it might be used. That law also required the administrative services department to create an annual inventory of artificial intelligence use in state agencies, and a framework that state employees are required to follow when using it.
The image shared by DEEP was created using an online design service called Canva, spokesman Will Healey confirmed. Canva didn’t always have AI functionality, which is why it isn’t on the most recent inventory.
“While Canva was not included in last year's AI inventory, several agencies do have professional licenses, and I would expect it to be included in the 2025 inventory,” Appleby said. “It is a challenge to track and manage vendors adding AI to products that are already in use.”
When it first came in, “Canva didn't have those AI capabilities,” Raymond said. “We did approve the AI use for a different agency, so it's not like we weren't aware of the product or the use. But it also clearly is something that we need to be watching.”
Many agencies are actively using AI. The Department of Insurance, for example, uses a tool called Kira, which “reviews statutory or regulatory language against forms filed by industry to ensure compliance with those statutes and regulations,” according to the 2024 inventory.
Multiple agencies use a service called Abnormal Security, which “leverages AI and machine learning to provide real-time detection and response capabilities for email-based threats.” CrowdStrike, also used by multiple agencies, “uses AI and machine learning to provide real-time detection and response to a wide range of cyber threats, such as visualizing potential attacks in real-time.”
Another service, Pyrra, “uses AI to identify potential misinformation related to CT election laws and administration on social media sites.”
The administrative services' Bureau of Information Technology Solutions is “currently evaluating several AI use cases, including pilot programs for ChatGPT and the full Copilot program,” Appleby said. “All CT.gov email users now have access to Copilot chat — that's the version that's not integrated across all Microsoft applications but provides helpful assistance in an environment that protects user information.”
Microsoft Office and Microsoft Teams also have AI functionality, so they’re on the inventory, too, as are commonly used services such as WordPress and Zoom. Now five pages long, the inventory is expected to grow considerably.
“We have hundreds and thousands of products in use in a variety of different places, and they're getting AI features introduced into them,” Raymond said. “It's not like the AI was out there when we started using them, but we cross another month and someone introduces a new feature, and now all of a sudden AI is a part of what you're doing.”
‘Vulnerabilities’
The rapidly shifting AI landscape presents some challenges, said Vahid Behzadan, who studies and teaches about AI policy at the University of New Haven.
“Figuring out the new threats that emanate from integration of AI in different tools and technologies is still in its early days,” he said. “There are guidelines on making certain types of AI systems more secure. Of course, security is never 100% guaranteed.”
Digital tools that were relatively secure are less so after they are integrated with artificial intelligence, Behzadan said: “AI systems are cyber systems, and similar to all other cyber systems, they come with their own vulnerabilities.”
Valuable information must be better protected as AI use grows, Raymond said. The AI framework, which all state employees are required to follow, asks questions such as how the data is being used and where it is going, he said.
Everyone is in “full alignment” to say, “We as a state will use these technologies in a way that does not expose the critical data in our care or [allow it to be] used to train AI models,” he said.
“The need to operate to understand the data that you're using and where it's going and how it's protected is fundamental to our underlying approach and our policies and our pilots,” Raymond said.
Some pilot programs with ChatGPT and Microsoft guarantee data will not be shared outside of Connecticut, but that comes with a cost.
“Those kinds of characteristics come with higher class models. If it's free, and if people are using the free version, then your data is the price of entry,” Raymond said. “We do not do that in the state, and have taken efforts to educate people on the dangers of doing so.”
Behzadan listed some general concerns that apply to any AI model but are especially important in government work: “Accuracy, bias, trustworthiness and responsibility. Essentially, to ensure that the AI tools used by the government for the purposes of tasks that are within the responsibility of the government body are trustworthy. They do what they should do.
“If things go wrong, they are caught, they can be mitigated, and also, somebody is held accountable for making sure that the AI system is running without any deleterious impact on people's lives,” he said.
The promise of AI
Despite some concerns, Raymond and Behzadan are both excited about what AI technology can do for the state government.
Translation services and information accessibility are low-hanging fruit, but there are other possibilities, too, “such as monitoring traffic patterns in emergency management, monitoring the state of how a storm has affected particular neighborhoods, automatically and autonomously and so on,” Behzadan said.
Connecticut’s state government has more than 45,000 employees engaged in a variety of tasks at 40 different agencies, and AI can make it easier for residents to manage their own data within it, Raymond said.
“The use of some of these tools to allow you as an individual to operate more efficiently with our different agencies, I think, will change the nature of how people consume government services,” he said.
There are rules and laws that would keep agencies themselves from sharing data, but nothing would prevent the state from providing a tool residents could use to corral their own data.
“Then when you interact with us, say, ‘Look, here's my current stuff,’ in a really automated way,” Raymond said.
Take, for example, a person who has been laid off and is having difficulty paying rent. The state offers services that could help, and AI could make them smoother to access.
“Instead of you thinking about having to apply for a specific benefit, what if you were to say, ‘Dear Connecticut, I have a need right now. Tell me all the different things that I might qualify for to help me transition from an out-of-work experience to additional training to bridge a temporary financial gap,’” Raymond said.
“Don’t think about going to one agency and applying. Your curated data, matched with what we offer from our programs, can actually lower those barriers between how we think about what we provide. That’s where I think this is going,” he said.
Jordan Nathaniel Fenster is a reporter with CT Insider. He's worked as a journalist covering politics, cannabis, public health, social justice and more for 25 years. Jordan's work has appeared in The New York Times and USA Today in addition to multiple regional and local newspapers. He is an award-winning reporter, podcaster and children's book author. He serves as senior enterprise reporter and lives in Stamford with his dog, cat and three daughters. He can be reached at [email protected].