👋 Hello and welcome back to Cybersecurity Decrypted, your weekly recap of the latest cybersecurity headlines from Expert Insights. Each week, we bring you the latest news so that you can stay ahead in cybersecurity.
Prefer to get your news on-the-go? You can listen to this briefing on the Decrypted Podcast.
This week, the US Senate struck a ban on state regulation of AI from President Trump’s tax-cut and spending bill. The original proposal would have blocked individual states from creating any laws to regulate AI for the next 10 years, while a later compromise would have shortened this period to 5 years and allowed a handful of exceptions.
During the debate around the “big, beautiful bill”, many AI companies argued that it is difficult for them to comply with every state’s individual rules. However, lawmakers ultimately decided that states should be allowed to protect their constituents against threats such as deepfakes, unsafe autonomous vehicles, and misinformation, rather than letting the AI industry go completely unchecked.
While the moratorium was scrapped, the episode has pulled a heated debate into the limelight: should the development, dissemination, and use of AI be regulated?
“The data scientist in me says, ‘Yes, there needs to be firm guidance to guide people to do this responsibly,’” Darktrace’s SVP of Security and AI Strategy Nicole Carignan tells Expert Insights. “I’m not at all scared about AI. As a data scientist, I am scared of stupid people innovating with AI without thinking through the ethical and security implications.
“But as an innovator, you have to be able to run fast. And with this great innovation, we can achieve some really cool, almost miraculous things. So, can we innovate quickly with good data science principles to do it safely, responsibly, ethically, and securely? I think we can.”
Perhaps, as Nicole suggests, the answer lies in presenting innovators with guidance rather than strict regulation: guidance that enables companies to innovate at scale, while encouraging them to focus on their ethical and security responsibilities rather than checking specific boxes for compliance.
As for who writes that guidance? That remains to be seen.
Industry news, including funding, acquisitions, and new product releases to watch this week.
Threats and APTs
Government and Policy
The Danish government has announced a new initiative to combat the creation and dissemination of deepfakes and put a stop to online misinformation. As part of the initiative, Denmark is working on changing copyright law to give individuals the property rights over their own image, facial features, and voice.
If approved, the change in law will enable Danish citizens to demand that online platforms remove deepfakes of themselves shared without consent. It will also enable artists to demand the removal of “realistic, digitally generated imitations” of their performances shared without consent.
With the aim of preserving freedom of expression, parodies and satire will still be allowed, though the criteria for what content qualifies as exempt have yet to be clearly defined.
If platforms don’t comply with the new legislation, they could be subject to “severe fines”, says Culture Minister Jakob Engel-Schmidt.
The announcement comes just a few months after the US signed into law the TAKE IT DOWN Act, which bans “the nonconsensual online publication of intimate visual depictions of individuals, both authentic and computer-generated, and requires certain online platforms to promptly remove such depictions upon receiving notice of their existence.”
Following our deep dive into deepfakes in last week’s issue, we think Denmark’s plans are a great step towards tackling the deepfake dilemma. But will the rest of the world follow suit?