CXOs stress responsibility and caution with generative AI
As shockwaves emanate from ChatGPT and a growing list of new AI tools, it is vital for leaders to observe how these technologies are adopted, where they fit the business, and what value they deliver. While productivity, new capabilities, and business models can and will leap forward, we must also protect against possible dangers and concerns.
Data handling and privacy are hot-button topics for generative AI, making security a top issue for any organization eyeing it for potential implementation. Elon Musk and other prominent AI researchers and practitioners are urging AI companies to pause training of high-end models, citing “risks to society.” Should we heed this warning, or is it overblown?
Zscaler recently joined a virtual panel with experts from Microsoft, Swisscom, and HiSolutions to discuss these issues and separate hype from facts. The Handelsblatt CXO Roundtable drew many attendees who contributed sharp insights to the debate. One CIO said, “Don’t be afraid of AI. Be afraid of people that know how to use it.” But sometimes, we should also be afraid of people who do not know how to use it.

Andreas Braun, CTO at Microsoft Germany, said that his company is investing heavily in AI and running it on Azure. The infrastructure requirements of this new technology are substantial, and no one expected user numbers to go through the roof the way they did. Scaling at this level is only possible in the cloud.
Coincidentally, the company (a Zscaler partner) just launched a new tool enabled by GPT-4 tech to help workers in security operations centers keep up with attackers. Similarly, Zscaler ZChat (our experimental bot) illustrates how to cut out the complexity and time needed to get insights into an org’s threat landscape and cyber operations, among other business processes.
Read: Microsoft and Zscaler: CoPilot/ZChat and the future enterprise apps platform in the Generative AI era
Gerd Niehage, CTO at Swisscom, a telecommunications provider, pointed out that for any business to be good at AI, it must treat data practices for collection, cleaning, and overall quality as a core competence. Once the data is reliably clean, AI can move into daily business, with uses such as automated customer interactions.
Manuel Atug, Head of Business Development at German tech company HiSolutions, dived into the limitations of generative AI, looking critically at algorithms and asking whether their output should ever be treated as infallible. In security, automation produces many false positives that must be caught by experienced people, meaning human intelligence is still needed to close the loop. Atug also noted that the computational overhead of AI could burden an already stressed environment, and that AI is not a silver bullet for everything. Watch out for wrong decisions based on AI, which can be costly and painful.
Below are three key takeaways from the 90-minute session:
- Impressive progress: ChatGPT demonstrated learning capability and began a new era. Past AI winter and recent leaps in AI indicated that “there are decades where nothing happens and weeks where decades happen.”
- Limitations: You have to be aware of the limitations and possible applications. The speakers and audience agreed there are many constraints; some view ChatGPT as a playground, while others think we should not be too harsh on it. Most of the concerns brought up by CIOs are issues that exist without ChatGPT, meaning that humans share some of the same vulnerabilities.
- Responsible AI: Making AI ethical and accountable is a significant challenge that must be met before trust and adoption go mainstream. The controls we need go beyond anything done before, given that the entire Internet has become the data source and only some of what is on the internet is true.
There's plenty to be excited about in the AI renaissance from the technology point of view. But as with any new technology, there will always be outside actors looking to exploit any weakness in the solution. Zscaler cares deeply about “Responsible AI.” While we don’t anticipate any real pause in large language model research and development, we must bring “Responsible AI” to the forefront. Lastly, business leaders must work with their CISOs to ensure their security architectures can adapt to this new world.
What to read next
How personalized ChatGPT will give superpowers to every CIO (video demo)