When OpenAI unveiled its ChatGPT program in November 2022, it made AI applications accessible to almost everyone. The chatbot made AI tangible for everyday people, not just data scientists or computer engineers.

But not everyone wants to use these tools for benign purposes. Earlier this year, a finance worker in Hong Kong transferred more than US$25mil to scammers who used deepfake technology to pose as the company’s chief financial officer on a video call.

“These are things that keep me awake,” Maria Milosavljevic, group chief information security officer at ANZ Banking Group, said during a Fortune virtual conversation on Wednesday. “Unfortunately AI, which is incredibly useful and powerful, is available to both our friends and foes.”

The event, conducted in partnership with Accenture, explored the interaction between new generative AI tools and cybersecurity.

There is “a widening of the breadth and volume of attacks across the board,” said Scott Wilkie, global lead for emerging technology security at Accenture. The consulting firm has seen roughly a “doubling” of ransomware attacks and a 1,000% increase in phishing attacks over the past 12 months.

“Generative AI and new large language models are enabling more sophisticated attacks in greater volume,” he said.

Calvin Ng, director of the cybersecurity program center at the Cyber Security Agency of Singapore, acknowledged that the frequency and severity of cyberattacks had increased. Proper risk assessment and management are necessary, he explained.

“You can easily craft a phishing email, you can easily automate malware,” Ng elaborated. “You don’t need to be a cybersecurity professional; you’re able to craft malware using ChatGPT. Things are being simplified; doing evil is simplified today.”

Ng warned about the potential of “data poisoning,” where an adversary targets the training set behind an AI model. Organizations need to think about the consequences of deploying AI that can “trawl information [from] all over and produce information freely without consulting somebody,” and put in guardrails to prevent data poisoning, he explained.

On Wednesday, Microsoft announced it had discovered a way to jailbreak a generative AI model, causing it to ignore its guardrails and generate content related to explosives, drugs, and politics.

Yet panelists also noted that AI can help, and not just hinder, cybersecurity teams.

“We have more than 10 billion data events coming in every day. We can’t have human beings looking at every single thing, so 35% of our incident response has already been automated thanks to machine learning and AI,” Milosavljevic said.

Cybersecurity as a ‘team sport’

As attacks spread, cybersecurity is no longer an initiative that concerns only IT departments. On Wednesday, the panelists noted that cybersecurity is not just a companywide initiative, but an effort that involves multiple parties, including national governments.

“We’ve always had an adage that cybersecurity should be a team sport,” Wilkie said. “Certainly over the last five years, I have never seen a time where collaboration has been greater or better intentioned.”

For example, some 40 countries are part of an international pact to agree not to give in to demands from ransomware attackers. Members also agree to work together to undertake research projects to build resiliency.

Part of why governments care is also because of national security and geopolitics. While technology can help a country grow its economy, it needs to do so in a “balanced environment,” Ng said.

Companies, and particularly those outside of heavily regulated industries, really need to get their “basic cyber hygiene right,” said Jennifer Tiang, regional head of cyber practice for Asia at Willis Towers Watson. She drew the comparison to home security: Having the most sophisticated camera systems means little if owners choose not to lock their doors.

“The risks facing them are very sophisticated,” she said. “They need to get the basics right, and invest where perhaps there was minimal investment.” – Fortune.com/The New York Times