Tech Giants Make AI Commitments Amid Channel Concern Over Cyberattacks

The firms declared they will also invest more funds in cybersecurity.


Representatives from seven tech giants, including Google, Microsoft and OpenAI, have adopted voluntary early measures to curtail the risks posed by artificial intelligence, according to a White House statement on Friday. More regulations are expected as policymakers worldwide move quickly to create their own guidelines for AI.

The White House said it received immediate commitments from the companies leading the way on AI, which have pledged to share information with one another. They will inform governments and researchers about how they are mitigating the technology's risks, including by having third parties test their AI tools.

The firms also pledged to invest more money in cybersecurity.

The White House said the aim is to ensure the future of AI is built on security, safety and trust.


Cato Networks’ Etay Maor

Etay Maor, senior director of security strategy at channel player Cato Networks, said his organization supports the White House's initiative to convene leading tech companies in a collective effort to ensure artificial intelligence is used responsibly.

“As a firm believer in the transformative power of AI, we recognize the importance of establishing robust commitments that ensure its ethical development and deployment. Collaboration among tech companies fosters a united front in addressing challenges and harnessing AI’s potential for societal benefit. However, this will not apply to cyber criminals and countries [that] develop AI tools for their malicious benefit.”

The AI commitments come from tech giants that are vendors, or that work closely with vendors, operating within the channel. What do these measures mean for partners?


Profit Advisory Group’s Barry Brazen

Barry Brazen, co-founder of Profit Advisory Group, didn't mince words about the commitments in a LinkedIn post on Friday.

“The White House just announced that Google and Microsoft are promising to keep AI safe. Whew! Don’t you feel sooooo much safer? Honestly, could there be a less trustworthy group right about now?” he said.

Other experts appear to share his skepticism.

Cyberattacks Still a Concern with AI

Mike Parkin, senior technical engineer at Vulcan Cyber, said for organizations willing to follow the rules, it’s possible to put guardrails in place that will keep generative AI, for example, from delivering false information. But that won’t be the case for a threat actor who’s using it specifically for that purpose.


Vulcan Cyber’s Mike Parkin

“If a hostile nation, for example, wants to spread disinformation to influence an election, you can be sure they will throw the resources they need at the problem and they will not be bound by their target’s legal requirements,” he said. “A voluntary system can work within the scope of common usage, but it will only work for organizations that operate legitimately. I am far less concerned with one of the major players having flaws in their publicly available AI tools than I am with a hostile nation or cybercriminal organization building out their own capability and using it against the general public.”

Lack of Transparency with Tech Giants

Dave Randleman, field CISO of application security and ethical hacking at Coalfire, said one of the primary cybersecurity concerns is the potential for AI systems to become vulnerable to cyberattacks and exploitation by malicious actors.

As AI technologies become more prevalent and complex, they could be targeted by cybercriminals seeking to manipulate or compromise them to their own advantage. For instance, an AI system used for critical decision-making could be manipulated into producing inaccurate results, with severe consequences.


Coalfire’s Dave Randleman

“Corporate voluntary systems are just virtue signaling,” he said. “All the companies signing onto the voluntary system are ultimately competitors against their industry peers. AI offers such a productivity boost that any advantage will be taken to produce a better AI product that can be leveraged against corporate rivals. Our biggest concern is the lack of transparency into AI applications, as they are often gate kept by intellectual property patents and held as trade secrets. Some AI systems, especially those using deep learning techniques, can be opaque and challenging to interpret. The lack of transparency in how certain AI algorithms arrive at their decisions raises concerns about potential biases and potential cybersecurity risks.”

Want to contact the author directly about this story? Have ideas for a follow-up article? Email Claudia Adrien or connect with her on LinkedIn.

About the Author(s)

Claudia Adrien

Claudia Adrien is a reporter for Channel Futures where she covers breaking news. Prior to Informa, she wrote about biosecurity and infectious disease for a national publication. She holds a degree in journalism from the University of Florida and resides in Tampa.

Edward Gately

Senior News Editor, Channel Futures

As news editor, Edward Gately covers cybersecurity, new channel programs and program changes, M&A and other IT channel trends. Prior to Informa, he spent 26 years as a newspaper journalist in Texas, Louisiana and Arizona.
