
Transcript: House Hearing on DHS and CISA Role with Securing AI

Gabby Miller / Dec 15, 2023

Views in a House Homeland Security Subcommittee on Cybersecurity and Infrastructure Protection hearing entitled, “Considering DHS’ and CISA’s Role in Securing Artificial Intelligence,” December 12, 2023. Source


On Tuesday, December 12, the House Homeland Security Subcommittee on Cybersecurity and Infrastructure Protection held a hearing called, “Considering DHS’ and CISA’s Role in Securing Artificial Intelligence.” Chaired by Rep. Andrew Garbarino (R-NY), and characterized as one of the “most productive” hearings of the year by Ranking Member Rep. Eric Swalwell (D-CA), the two-hour hearing covered topics such as how to safely and securely implement AI in critical infrastructure, red teaming and threat modeling to defend against cybercriminals, and breaking down threat viability and attacks on machine learning, among others.

Witnesses included:

  • Ian Swanson, Chief Executive Officer and Founder, Protect AI
  • Debbie Taylor Moore, Senior Partner and Vice President, Global Cybersecurity, IBM Consulting
  • Timothy O’Neill, Vice President, Chief Information Security Officer and Product Security, Hitachi Vantara
  • Alex Stamos, Chief Trust Officer, SentinelOne

Their written testimonies can be found here.

Much of the discussion around red teaming was led by witness Debbie Taylor Moore of IBM Consulting. In her opening remarks, Taylor Moore stressed the need for an AI usage inventory that would tell CISA where AI is enabled and in which applications, in order to help identify risks that could manifest into active threats as well as enforce an effective AI governance system. One of the biggest challenges of red teaming AI systems will be remediation once the gaps are identified, according to Taylor Moore, reinforcing the need to upskill workforces across sectors. “I think that CISA is like everyone else. We're all looking for more expertise that looks like AI expertise in order to be able to alter the traditional red team,” Taylor Moore noted. The goal is to have visibility into these systems, according to witness Ian Swanson of Protect AI.

The conversation returned time and again to how standards and policies, as well as easy and swift access to different government agencies, will impact small and medium-sized businesses and localities facing cybersecurity threats. With newer and more widely accessible AI capabilities, nefarious actors now have capabilities that only specialized workers at Lockheed Martin or the Russian Foreign Intelligence Service had four or five years ago, according to witness Alex Stamos of SentinelOne. “I think a key thing for CISA to focus on right now is getting this reporting technology up. One of the problems we have as defenders is we don't talk to each other enough. The bad guys are actually working together. They hang out on these forums, they trade code, they trade exploits,” Stamos explained. One suggested way Congress could fill this gap is by sharing investigative resources from the Secret Service and FBI with local law enforcement.

Hitachi Vantara’s Timothy O’Neill stressed the need for CISA to work across agencies to avoid duplication when developing requirements that AI systems must be tested for or comply with. Rep. Swalwell echoed this sentiment in his opening remarks. “Moving forward, harmonizing AI policies with our partners abroad and across the federal enterprise will be critical to promoting the secure development of AI without stifling innovation or unnecessarily slowing deployment,” he noted.

A small but significant portion of the hearing was also devoted to AI’s role in the 2024 elections. Rather than focusing on deepfakes of presidential candidates, Stamos believes that time would be better spent focusing on the ways AI is a force multiplier for content produced by bad actors. “If you look at what the Russians did in 2016, they had to fill a building in St. Petersburg with people who spoke English. You don't have to do that anymore,” he said. Now, a very small group of people can have the same capabilities as a large professional troll farm. The role of government in taking action against this type of mis- and disinformation is up in the air, with cases like Murthy v. Missouri (formerly Missouri v. Biden) pending in the courts over First Amendment concerns. “Instead of this being a five-year fight in the courts, I think Congress needs to act and say, these are the things that the government is not allowed to say, this is what the administration cannot do with social media companies. But if the FBI knows that this IP address is being used by the Iranians to create fake accounts, they can contact Facebook,” Stamos said.

What follows is a lightly edited transcript of the hearing.

Rep. Andrew Garbarino (R-NY):

The Committee on Homeland Security Subcommittee on Cybersecurity and Infrastructure Protection will come to order without objection. The chair may recess at any point. The purpose of this hearing is to receive testimony from a panel of expert witnesses on the cybersecurity use cases for artificial intelligence, or AI, and the security of the technology itself, following the administration's release of the executive order on safe, secure, and trustworthy development and use of artificial intelligence. I now recognize myself for an opening statement.

Thank you to our witnesses for being here to talk about a very important topic, securing artificial intelligence, or AI. I'm proud that this subcommittee has conducted thorough oversight of CISA's many missions this year, from its federal cybersecurity mission to protecting critical infrastructure from threats. Now, as we head into 2024, it is important that we take a closer look at the emerging threats and technologies that CISA must continue to keep pace with, including AI. AI is a hot topic today among members of Congress and Americans in every single one of our districts. AI is a broad umbrella term encompassing many different technologies and use cases, from predictive maintenance sensors in operational technology to large language models like ChatGPT, making a common understanding of the issue difficult. As the general curiosity in, and strategic application of, AI across various sectors continues to develop, it is vitally important that the government and industry work together to build security into the very foundation of the technology, regardless of the specific use.

The administration's executive order on AI is the first step in building that foundation. DHS and CISA are tasked in the EO with, one, ensuring the security of the technology itself and, two, developing cybersecurity use cases for AI. But the effectiveness of the EO will come down to its implementation. DHS and CISA must work with the recipients of the products they develop, like federal agencies and critical infrastructure owners and operators, to ensure the end result meets their needs. The subcommittee intends to pursue productive oversight over these EO tasks. The timeline laid out in the EO is ambitious, and it is positive to see CISA's timely release of its roadmap for AI and globally endorsed guidelines for secure AI system development. At its core, AI is software, and CISA should look to build AI considerations into its current efforts rather than creating entirely new ones unique to AI.

Identifying all future use cases of AI is nearly impossible, and CISA should ensure that its initiatives are iterative, flexible, and consistent even after the deadlines in the EO pass, to ensure that the guidance it provides stands the test of time. Today we have four expert witnesses who will help shed light on the potential risks related to the use of AI in critical technology, including how AI may enable malicious cyber actors' offensive attacks, but also how AI may enable defensive cyber tools for risk detection, prevention, and vulnerability assessments, as we all learn more about improving the security and secure usability of AI from each of these experts. Today, I'd like to encourage the witnesses to share questions that we might not yet have the answer to. With rapidly evolving technology like AI, we should accept that there may be more questions than answers at this stage. The subcommittee would appreciate any perspectives you might have that can shape our oversight of DHS and CISA as they reach their EO deadlines next year. I look forward to our witness testimony and to developing productive questions for DHS and CISA together here today. I now recognize the ranking member, the gentleman from California, Mr. Swalwell, for his opening statement.

Rep. Eric Swalwell (D-CA):

Thank you, chairman, and as we close out the year, I want to thank the chairman for what I think has been a pretty productive period for the subcommittee as we've taken on a lot of the challenges in this sphere. I also want to offer my condolences to the chairman of the full committee for the families impacted by the devastating tornadoes that touched down in Chairman Green's district in Tennessee over the weekend. My staff and I and the committee staff are keeping Chairman Green and his constituents in our thoughts as we grieve for those that we've lost and as they rebuild. Turning to the topic of today's hearing, the potential of artificial intelligence has captivated scientists and mathematicians since the late 1950s. Public interest has grown, of course, from watching Watson beat Ken Jennings at Jeopardy, to AlphaGo defeating the world champion Go player in 2015, to the debut of ChatGPT.

The developments in AI over the past five years have generated interest and investment and may serve as a catalyst to propel public policy that'll ensure that the United States remains a global leader in innovation and that AI technology is used safely, securely, and responsibly. Over the past year alone, the Biden administration has issued a blueprint for an AI bill of rights, a national AI research resource roadmap, a national AI R&D strategic plan, and secured voluntary commitments from the nation's top AI companies to develop AI technology safely and securely. And of course, as the chairman referenced, just over a month ago, the president signed a comprehensive executive order that marshals the full resources of the federal government toward ensuring the United States can fully harness the potential of AI while mitigating the full range of risks that it brings. I was pleased that this executive order directs close collaboration with our allies as we develop policies for the development and use of AI.

For its part, CISA is working with its international partners to harmonize guidance for the safe and secure development of AI. Two weeks ago, CISA and the UK's National Cyber Security Centre issued joint guidelines for secure AI system development. These guidelines were also signed by the FBI and the NSA as well as international cybersecurity organizations from Australia, Canada, France, Germany, and Japan, among others. Moving forward, harmonizing AI policies with our partners abroad and across the federal enterprise will be critical to promoting the secure development of AI without stifling innovation or unnecessarily slowing deployment. As we promote advancements in AI, we must remain cognizant that it is a potent dual-use technology. Also, I just want to touch a little bit on deepfakes, and I hope the witnesses will as well. They are easier and less expensive to produce, and the quality is better. Deepfakes also make it easier for adversaries to masquerade as public figures and either spread falsehoods or undermine their credibility.

Deepfakes have the potential to move markets, change election outcomes, and affect personal relationships. We must prioritize investing in technologies that'll empower the public to identify deepfakes. Watermarking is a good start, but not the only solution. The novelty of AI's new capabilities has also raised questions about how to secure it. Fortunately, many existing security principles which have already been socialized apply to AI. To that end, I was pleased that CISA’s recently released AI roadmap didn't seek to reinvent the wheel where it wasn't necessary, and instead integrated AI into existing efforts like Secure by Design and software bill of materials. In addition to promoting the secure development of AI, I'll be interested to learn from the witnesses how CISA can use artificial intelligence to better execute its broad mission set. CISA is using AI-enabled endpoint detection tools to improve federal network security, and the executive order from the president directs CISA to conduct a pilot program that will deploy AI tools to automatically identify and remediate vulnerabilities on federal networks. AI also has the potential to improve CISA's ability to carry out other aspects of its mission, including its analytic capacity.

As a final matter, as policymakers, we need to acknowledge that CISA will require the necessary resources and personnel to fully realize the potential of AI while mitigating the threats it poses to national security. I once again urge my colleagues to reject any proposal that would slash its budget in fiscal year 24, as AI continues to expand and we will need to embrace and use it to take on the threats in the threat environment. So with that, I look forward to the witnesses' testimony. I thank the chairman for holding this hearing and I yield back.

Rep. Andrew Garbarino (R-NY):

Thank you, Ranking Member Swalwell. Before we get to the witnesses, without objection, I would like to allow Mr. Pfluger from Texas and Mr. Higgins from Louisiana to waive on to the subcommittee for this hearing. Okay, so moved. Other members of the committee are reminded that opening statements may be submitted for the record. I'm pleased that four witnesses came before us today to discuss this very important topic. I ask that our witnesses kindly rise and raise their right hand. Do you solemnly swear that the testimony you will give before the Committee on Homeland Security of the United States House of Representatives will be the truth, the whole truth, and nothing but the truth, so help you God? Let the record reflect that the witnesses have all answered in the affirmative. Thank you. Please be seated.

I would now like to formally introduce our witnesses. First, Ian Swanson is the CEO and founder of Protect AI, a cybersecurity company for AI. Prior to founding Protect AI, Mr. Swanson led Amazon Web Services' worldwide AI and machine learning, or ML, business. He also led strategy for AI and ML products at Oracle. Previously in his career, he also founded DataScience.com and was an executive at American Express, Sprint, and Symantec.

Debbie Taylor Moore is Vice President and Senior Partner for Cybersecurity Consulting Services at IBM. She's a 20-plus-year cybersecurity executive and subject matter expert on emerging technologies and cybersecurity, including AI. Ms. Moore has also led security organizations at SecureInfo, Kratos Defense, Verizon Business, and others.

Timothy O'Neill is Vice President, Chief Information Security Officer, and head of product security at Hitachi Vantara, a subsidiary of Hitachi at the forefront of where information technology and operational technology converge across multiple critical infrastructure sectors. Prior to this role, he held leadership roles at Amazon, Hewlett Packard, and Blue Shield of California. Mr. O'Neill has served as a law enforcement chief focused on cybercrime forensics and investigations.

Alex Stamos is the Chief Trust Officer of SentinelOne, where he works to improve the security and safety of the internet. Stamos has also helped companies secure themselves in earlier roles at the Krebs Stamos Group, Facebook, and Yahoo. Of note, he also advises NATO's Cooperative Cyber Defence Centre of Excellence, which the subcommittee had the privilege of visiting in Estonia in June.

Thank you all for being here today. Mr. Swanson, I now recognize you for five minutes to summarize your opening statement.

Ian Swanson:

Good morning, members of the Subcommittee on Cybersecurity and Infrastructure Protection. I wish to start by thanking the chairman and ranking member for holding this important hearing and inviting me to provide testimony. My name is Ian Swanson. I am the CEO of Protect AI. Protect AI is a cybersecurity company for artificial intelligence and machine learning. For many companies and organizations, AI is the vehicle for digital transformation, and machine learning is the powertrain. As such, a secure machine learning model serves as the cornerstone for a safe AI application. Imagine there is a cake right here before us. We don't know how it got here. Who delivered it? We don't know who baked it. We don't know the ingredients or the recipe. Would you eat a slice of this cake? Likely not. This cake is not just any dessert. It represents the AI systems that have grown increasingly fundamental to our society and economy.

Would you trust AI if you did not know how it was built? If you did not know the practitioners who built it, how would you know that it is secure? Based on my experience, millions of machine learning models powering AI are currently operational nationwide, not only facilitating daily activities, but also embedded in mission-critical systems and integrated within our physical and digital infrastructure. Given the importance of these systems to a safely functioning government, I pose a critical question. If this committee were to request a comprehensive inventory of all machine learning models and AI in use in any business or US government agency, detailing the ingredients, the recipe, and the personnel involved, would any witnesses' business or agency be able to furnish a complete and satisfactory response? Likely not. AI needs oversight and understanding of an organization's deployments. However, large deployments of AI are highly dispersed and can depend heavily on widely used open source assets indispensable to the AI lifecycle.

This situation potentially sets the stage for a major security vulnerability akin to the SolarWinds incident, posing a substantial threat to national security and interests. The potential impact of such a breach could be enormous and difficult to quantify. My intention today is not to alarm, but to urge this committee and other governmental agencies to acknowledge the pervasive presence of AI in existing US business and government environments. It is imperative to not only discern, but also safeguard and responsibly manage AI ecosystems. To help accomplish this, AI manufacturers and AI consumers alike should be required to see, know, and manage their AI risk. Absolutely, I believe the government can help set policies to ensure secure artificial intelligence. Policies will need to be realistic in what can be accomplished, enforceable, and not shut down innovation or limit innovation to just major AI manufacturers. I applaud the work by CISA and support the three secure by design software principles that serve as their guidance for AI.

Software vendors and manufacturers of AI and machine learning must bear ownership for the security of their products and be held responsible, be clear on the security status and risks of their products, and build in technical controls and business processes to ensure security throughout the AI and machine learning application lifecycle, otherwise known as MLSecOps, or machine learning security operations. While Secure by Design and CISA's roadmap for artificial intelligence are a good foundation, they can go deeper in providing clear guidance on how to tactically extend these techniques to artificial intelligence. I recommend the following three starting behaviors to this committee and other US government organizations, including CISA, when setting policy for secure AI.

Create a machine learning bill of materials standard in partnership with NIST and other US government entities for transparency, traceability, and accountability in AI systems: not just the software bill of materials, but the machine learning bill of materials.

Invest in protecting the artificial intelligence and machine learning open source software ecosystem. These are the essential ingredients for AI.

Continue to solicit feedback and participation from technology startups, not just the large technology incumbents.

My company Protect AI and I stand ready to help preserve the full advantage in business, economics, and innovation that will ensure the continued leadership of the United States in AI for decades to come. We must protect AI commensurate with the value it will deliver. There should be no AI in the government or in any business without proper security of AI. Thank you, Mr. Chairman, ranking member, and the rest of the committee for the opportunity to discuss this critical topic of security of artificial intelligence. I look forward to your questions.

Rep. Andrew Garbarino (R-NY):

Thank you, Mr. Swanson. And just for the record, I probably would've eaten the cake. Ms. Moore, I recognize you for five minutes to summarize your opening statement.

Debbie Taylor Moore:

Thank you, Chairman Garbarino, Ranking Member Swalwell, and distinguished members of the subcommittee. I'm very honored to be here in my 20-plus-year career in cybersecurity, including working with DHS since its inception as both a federal contractor as well as a woman-owned small business leader. Let me ground my testimony by saying that the potential for AI to bolster cybersecurity for our critical infrastructure is enormous. Today, I'm here on behalf of IBM, which has been engaged for more than half a century in the AI space and is a leading AI company. Let me add that AI is not inherently high risk like other technologies; its potential for harm is rooted in both how it is used and by whom. Industry needs to hold itself accountable for the technology it ushers into the world, and the government has a role to play as well. Together we can ensure the safe and secure development and deployment of AI in our critical infrastructure, which as this subcommittee knows well underpins the economic security and the physical wellbeing of the nation.

In fact, my clients are already taking measures to do just that. I work with clients to secure key touch points: their data, their models, and their AI pipelines, both legacy and their plans for the future. We help them to better understand, assess, and meaningfully define the various levels of risk that government and critical infrastructure alike need to manage. For example, through simple testing, we discovered that there are ways for adversaries to conduct efforts like derailing a train or other troublesome and disruptive types of attacks. That knowledge helps us to create preventative measures to stop it from happening in real-world instances. And as the same is true for things like compromise of ATM machines and other critical infrastructure, we also use simulations, or red teaming, to mimic how an adversary could or would attack. We can apply these simulations to, for example, common large language models to discover flaws and exploitable vulnerabilities that could have negative consequences or just produce unreliable results.

These exercises are helpful in identifying opportunities to be addressed before they could manifest into active threats. In short, our clients know that AI, like any technology, could pose a risk to our nation's critical infrastructure depending on how it's developed and deployed, and many are already engaging to assess, mitigate, and manage that risk. So my recommendation for the government is to accelerate existing efforts and broaden awareness and education. Rather than reinventing the wheel, first CISA should execute on its roadmap for AI and focus on three particular areas.

Number one would be education and workforce development. CISA must elevate AI training and resources from industry within its own workforce and the critical infrastructure it supports.

As far as the mission, CISA should continue to leverage existing information sharing infrastructure that is sector-based to share AI information such as potential vulnerabilities and best practices. CISA should continue to align investments domestically and globally with the goal of widespread utilization of tools and automation. And from a governance standpoint, to improve understanding of AI and its risks, CISA should know where AI is enabled and in which applications. This existing AI usage inventory, so to speak, can be leveraged to run an effective AI governance system. An AI governance system is required to envisage what needs to be protected. And lastly, we recommend that when DHS establishes the AI safety and security advisory board, it should collaborate directly with the existing AI and security related boards and councils and rationalize security to minimize hype and disinformation. This collective perspective matters. I'll close where I started. Addressing the risks raised by adversaries is not a new phenomenon. Using AI to improve security operations is also not new, but equally will require focus. And what we need today is urgency, accountability, and measurement of our execution. Thank you very much.

Rep. Andrew Garbarino (R-NY):

Thank you, Ms. Moore. Mr. O'Neill, I now recognize you for five minutes to summarize your opening statement.

Timothy O’Neill:

Thank you, Chairman Garbarino, Ranking Member Swalwell, and members of the subcommittee for inviting me here today. I'm Tim O'Neill, the Chief Information Security Officer and Vice President of product security at Hitachi Vantara. Hitachi Vantara is a subsidiary of Hitachi Limited, a global technology firm founded in 1910 whose focus includes helping create a sustainable society via data and engineering. We co-create with our customers to leverage information technology, IT, operational technology, OT, and our products and services to drive digital, green, and innovative solutions for their growth. It is probably familiar to you, but OT encompasses data being generated from general infrastructure or a control system that can then be used to optimize the operation and for other benefits. Because of our heavy focus on the intersection of IT and OT, one of our major areas of business projects and growth has been in the industrial AI area.

Industrial AI has the potential to significantly enhance the profitability of US manufacturing and create working environments that benefit employees producing products. Today's AI systems contain tools that workers can use to enhance their job performance. Programs are predicting possible issues and service recommendations based on the data being given to them and what the program has been trained to understand as the best expected picture. That is true of a predictive maintenance solution Hitachi may create for a client to help them more quickly ascertain the likely cause of a breakdown, or, in the case of a generative AI system, to predict what the next step could be in a maintenance manual. The US government has taken a number of positive steps over the last five years to promote and advance the development of AI. We encourage the US to continue the development of AI through international engagements and by reaffirming the US' commitment to digital trade standards and policies in digital trade titles and treaties like the ones found in the USMCA.

The recent AI executive order, or EO, speaks frequently to the need of protecting AI systems. CISA's core mission focuses on cyber threats and cybersecurity, making them the obvious agency to take the lead in executing this part of the EO. CISA is suited to support and provide resources for other agencies on cyber threats and security as those agencies in turn focus on their roles in implementing the executive order. This mission is vital to the federal government and one where CISA is by far the expert. We applaud the CISA team for their excellent outreach to stakeholders and private industry to understand implications of security threats and help carry out programs in the marketplace. Their outreach to the stakeholder community is a model for other agencies to follow. As CISA's expertise lies in evaluating the cyber threat landscape, they're best positioned to support the AI EO and help further the development of AI innovation in the US.

As CISA continues its mission, we recommend focusing on the following areas to help further the security of AI systems. One, work with agencies to avoid duplicative requirements that must be proven or complied with. Two, focus foremost on the security landscape, being the go-to agency for other federal agencies as they assess cyber-related AI needs. Three, be the agency advising other agencies on how to secure AI and their AI testing environments. Four, recognize the positive benefits AI can bring to the security environment, discovering breaches, potential vulnerabilities, and otherwise creating defenses. Hitachi certainly uses such tools in ongoing cybersecurity work. CISA's roadmap for AI has meaningful areas that can help promote the security aspects of AI usage. Avoiding duplicating the work of other agencies is important, so manufacturers do not have to navigate multiple sets of requirements. Such a complex approach could cause more harm than good and divert from CISA's well-established and widely understood position as a cybersecurity leader. It could also create inhibitions for manufacturers, especially small and medium-sized enterprises, from adopting AI systems that would otherwise enhance their workers' experience and productivity, improve factory safety mechanisms, and improve the quality of products for customers. Thank you for your time today, and I'm happy to answer any questions.

Rep. Andrew Garbarino (R-NY):

Thank you, Mr. O'Neill. Mr. Stamos, I now recognize you for five minutes to summarize your opening statement.

Alex Stamos:

Well, thank you, Mr. Chairman. Thank you, Mr. Swalwell. I really appreciate you holding this hearing and inviting me today. So I'm the Chief Trust Officer of SentinelOne. I've held the job for about a month, and in that function I've got two responsibilities. SentinelOne is a company that uses AI to do defense. We also work with companies directly to help them respond to incidents, and so I get to go out in the field and work with companies that are being breached and help them fix their problems, but then I'm also responsible for protecting our own systems, because security companies are constantly under attack these days, especially since the SolarWinds incident.

What I thought I'd do, if we're going to talk about the impact of AI on cybersecurity, is just set the stage on where we are in the cybersecurity space and where US companies are right now, so we can have an honest discussion about what AI's effects might be.

And the truth is, we're not doing so hot; we're kind of losing. We talk a lot in our field about the really high-end actors, the state actors, the GRU, the FSB, the MSS, the folks that you all get classified briefings on, and that's incredibly important, right? Just this weekend we learned more about Volt Typhoon, a Chinese actor that broke into the Texas power grid and a variety of critical infrastructure providers, which is scary and something we need to focus on. But while that very high-end stuff has been happening, something much more subtle has been occurring that's kind of crept up on us, which is the level of adversity faced by your standard mid-sized company, the kind of companies that honestly employ a lot of your constituents, 5,000 employees, 10,000 employees, successful in their fields, but not defense contractors or oil and gas or finance or the kinds of companies that have historically had huge security teams.

Those kinds of companies are having an extremely difficult time because of professionalized cybercrime. The quality of the cyber criminals has come up to the level that I used to only see from state actors four or five years ago. So now you will see stuff out of these groups, the BlackCats, the ALPHVs, the LockBits, the kinds of coordinated, specialized capabilities that you used to only see from hackers working for the Ministry of State Security or the Russian SVR. And sadly, these companies are not ready to play at that level. Now, the administration has done some things to respond to this. As you all know, there have been sanctions put in place to make paying ransom demands to certain actors more difficult. That strategy, I understand why they did it. I'm glad they did it, but it has failed, the current strategy of sanctioning. All it has done is create new compliance and billable-hour steps for lawyers before the ransom is paid.

It hasn't at all lowered the amount of money being paid to ransomware actors, which is something on the order of over $2 billion a year being paid by American companies to these actors. That money, they then go reinvest in their offensive capabilities. While this has been happening, the legal environment for these companies has become more complicated. You folks in Congress passed a law in 2022 that was supposed to standardize how you tell the US government that somebody has broken into your network. That statute created a requirement for CISA to create regulations. Now, it's taken them a while to create those, and I think it would be great if that were accelerated. But in the meantime, while we've been waiting for CISA to create a standardized reporting structure, the SEC has layered in and created a completely separate mechanism and requirements for public companies that don't take into account any of the equities that need to be thought of in this situation, including having people report within 48 hours, which from my perspective, usually at 48 hours, you're still in a knife fight with these guys.

You're attempting to get them out of the network. You're trying to figure out exactly what they've done. And the fact that you're filing 8-Ks in EDGAR that say exactly what you know, and the bad guys are reading it, is not a great idea. And some other steps that have been taken by the SEC and others have really over-legalized the response companies are taking. And so as we talk today, I hope we can talk about the ways that the government can support private companies. These companies are victims. They're victims of criminals, or they're victims of our geopolitical adversaries attacking American businesses. They are not to be punished. They should be encouraged. They should have requirements, for sure, but when we talk about that balance, we also need to encourage them to work with the government, and the government needs to be there to support them. Where does AI come into this?

I actually am very positive about the impact of AI on cybersecurity. Like I said, these normal corporations now have to play at the level Lockheed Martin did 10 years ago. When I became the CISO of Facebook, I had an ex-NSA malware engineer. I had threat intel people who could read Chinese, who could read Russian. I had people who had done incident response at hundreds of companies. There is no way an insurance company in one of your districts can go hire those people. But what you can do with AI is enable the kind of more normal IT folks who don't have years of experience fighting the Russians and the Chinese and the Iranians. We can enable them to have much greater capabilities, and that's one of the ways I think AI will be really positive. So as we talk about AI today, I know we're going to talk about the downsides, but I also just want to say there is a positive future here in using AI to help normal companies defend themselves against these really high-end adversaries. Thank you very much.

Rep. Andrew Garbarino (R-NY):

Thank you, Mr. Stamos. And like you, I agree that the SEC rule is terrible. Yeah, and hopefully the Senate will fix that this week, or we can take it up in January. Members will be recognized by order of seniority for their five minutes of questioning, and an additional round of questioning may be called after all members have been recognized. I now recognize—I'm not going to go into seniority. I'm going to go with Mr. Luttrell from Texas for five minutes.

Rep. Morgan Luttrell (R-TX):

Thank you, Mr. Chairman. Thank you all for being here today. This is definitely a space that we need to be operating in from now into the very distant future. Mr. Stamos, you brought up a very valid point. It's the smaller entities. I had a hospital get hit in my district the day after we had a CISA presentation in the district. My question is, because people show up after the attack happens, we have—or I would say it's inevitable when you peel this onion back—it's the human factor that more or less is the problem set, because we can't keep up with the advances in AI every second of every hour of every day. It's advancing, and it seems like, because the industry is extremely siloed—AI/ML is remarkably siloed based on the company you work for—as we try to secure artificial intelligence and we have the individual component, my question is, and this may even sound silly, but again, I don't know what I don't know: can AI itself secure AI? Is there any way that we can remove as much fault as possible and have artificial intelligence work to secure artificial intelligence? Because as humans, we can't work that fast. Does the question at all make sense? Mr. Stamos, you start that.

Alex Stamos:

Yes, absolutely, sir. I think it does make sense. I think where we're going to end up is we're moving away from a realm—this stuff is happening so fast that human reaction time is not going to be effective anymore, right? Yes. And it is going to be AI versus AI, and you'll have humans supervising, training, pushing the AI in the right direction on both the defender side and the attacker side.

Rep. Morgan Luttrell (R-TX):

Is there anything that exists out there right now in the AI/ML space that's combating it—and I dare not say on its own; I don't want to talk about the singularity and frighten people out of their clothes—but are we even remotely close? I agree with the earlier statement you made; we're headed there on this one.

Alex Stamos:

Yeah, so there's a bunch of companies, including our own, that use AI for defensive purposes. Most of it right now—one of the precepts of modern defense in large networks is you gather up as much telemetry data as possible, you suck as much data as possible into one place, but the problem is that having humans look through it is effectively impossible. And so using AI to look through the billions of events that happen per day inside of a medium-sized enterprise is what is happening right now. The frontier is that the AI is not super inventive yet, and I think that's where we're looking as defenders: to make it more creative and more predictive of where things are going and better at noticing things, weird attacks that have never been seen before, which is still a problem.
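To make the triage idea concrete: the sketch below is a toy illustration only, not any witness's or vendor's actual pipeline. It scores hypothetical (host, action) telemetry counts against a historical baseline with a simple z-score, standing in for the far richer learned models real products use to surface anomalies among billions of events.

```python
from collections import Counter

def anomaly_scores(events, baseline):
    """Score each (host, action) pair by how far its observed daily
    count deviates from a historical baseline (simple z-score).
    Real EDR systems learn models over much richer telemetry; this
    toy version only illustrates the machine-triage concept."""
    counts = Counter(events)
    scores = {}
    for key, observed in counts.items():
        mean, stdev = baseline.get(key, (0.0, 1.0))
        scores[key] = (observed - mean) / max(stdev, 1e-9)
    return scores

# Hypothetical one-day telemetry stream: a burst of failed logins.
events = [("web-01", "login_fail")] * 40 + [("web-01", "login_ok")] * 5
baseline = {("web-01", "login_fail"): (3.0, 2.0),   # mean, stdev per day
            ("web-01", "login_ok"): (5.0, 1.0)}

scores = anomaly_scores(events, baseline)
flagged = [k for k, s in scores.items() if s > 3.0]
print(flagged)  # only the failed-login burst exceeds the threshold
```

The point of the sketch is the division of labor Stamos describes: the machine reduces billions of raw events to a short list a human analyst can act on.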

Rep. Morgan Luttrell (R-TX):

And how do we even see something at that speed? I mean, we're into exascale computing, if I'm saying that correctly. How does the federal government model and scale this in order to support our industry?

Ian Swanson:

Yeah, it's a great question. I think we need to boil it down to the bedrock in order to build–

Rep. Morgan Luttrell (R-TX):

I'm all over that. Absolutely. Yes, please.

Ian Swanson:

So I think the simplest things need to be done first, and that is we need to use and require a machine learning bill of materials—essentially a record, a ledger. So we have source, so we have lineage, so we have understanding of how this AI works.
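The "ledger" Swanson describes is not specified in the hearing; as a rough, hypothetical illustration of what a machine-learning bill of materials might record, one could list each pipeline component with its provenance and a content hash (all field names and values here are invented for the sketch):

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class MLBOMEntry:
    """One component of a hypothetical ML bill of materials:
    a dataset, a model, or training code, with provenance."""
    name: str
    kind: str          # e.g. "dataset", "model", "training-code"
    version: str
    source: str        # where it came from: registry, team, URL
    sha256: str        # content hash for integrity/audit checks

def make_entry(name, kind, version, source, content: bytes) -> MLBOMEntry:
    # Hash the artifact's bytes so later audits can detect tampering.
    return MLBOMEntry(name, kind, version, source,
                      hashlib.sha256(content).hexdigest())

# An invented two-component BOM for an imaginary fraud model.
bom = [
    make_entry("txn-logs-2023", "dataset", "1.2", "internal-lake",
               b"...raw training data..."),
    make_entry("fraud-net", "model", "0.9", "ml-team",
               b"...serialized weights..."),
]
print(json.dumps([asdict(e) for e in bom], indent=2))
```

With source, lineage, and a hash per component, the ledger gives the visibility and auditability Swanson says must come before security can be layered on.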

Rep. Morgan Luttrell (R-TX):

Is it even possible to encode that amount of retrospective data?

Ian Swanson:

It is. It is. And it's necessary.

Rep. Morgan Luttrell (R-TX):

I believe it's necessary, but even I don't know what that looks like. We have 14 national laboratories with some of the fastest computers on the planet. I don't think we've touched it yet.

Ian Swanson:

And as I said, I think there are millions of models live across the United States, but there definitely is software from my company and others that is able to index these models and create bills of materials. And only then do we have visibility and auditability into these systems, and then you can add security.

Rep. Morgan Luttrell (R-TX):

How do we share that with Rosie's Flower Shop in Magnolia, Texas?

Ian Swanson:

I think that's a challenge, but we're going to do the work on that. That's something we're trying to figure out, to sit down with all of you and say, how do we bring all this down to small and medium-sized businesses, and not simply the largest enterprises and the AI incumbents?

Rep. Morgan Luttrell (R-TX):

I have 30 seconds. I'm sorry I can't get to each and every one of you, but I would really like to see a broken-out infrastructure on the viability of threats and the attack mechanisms, which we can populate or support at our level to get you what you need.

We can't move at the speed that this is, and I don't think people can appreciate the amount, or just the sheer computational analytics, that goes into where we are right now, and we are still in our infancy. But if we had the ability to—if you could put it in crayon for me, it's even better—so we can understand, and when we understand we can speak to the various parts: this is why this is important, and this is why we need to move in this direction in order to stay in front of the threat. But thank you, Mr. Chairman. I yield back.

Rep. Andrew Garbarino (R-NY):

The gentleman yields back. I now recognize the ranking member, Mr. Swalwell of California, for five minutes.

Rep. Eric Swalwell (D-CA):

Great, thank you, Chairman. And like all the witnesses, I share in the excitement about the potential of AI, and one piece of this that is not discussed enough is equity in AI, and making sure that every school district in my district gives a kid the opportunity to learn computers. And I think that's one part we have to get right, to make sure that you don't have two classes of kids: the class that learns AI and the class that doesn't have the resources. That's a separate issue. But on cybersecurity, Mr. Stamos, if you could just talk about what AI can do on the predictive side to help small and medium-sized firms kind of see the threats that are coming down the track and stop them. And is that affordable right now? Is it off the shelf? How do they do that?

Alex Stamos:

Yeah, so I think this is related to Mr. Luttrell's flower shop. He's asking, if you're a small or medium-sized business, it has never been either cost-effective or, really, honestly, possible to protect yourself at the level that you're dealing with by yourself. And so I think the way that we support small and medium firms is we try to encourage, one, moving them to the cloud as much as possible—effectively, collective defense. If my mail system is run by the same company that's running a thousand other companies', and they have a security team of four or five hundred people that they can amortize across all those customers, that's the same thing with AI. And then the second is you probably build more—what we call MSSP, managed security service provider, relationships—so that you can go hire somebody whose job it is to watch your network, and they give you a phone call, and hopefully, if everything's worked out and the AI has done its job, you get a call that says: oh, somebody tried to break in. They tried to encrypt a machine. I took care of it.

Rep. Eric Swalwell (D-CA):

And what can CISA do to work with the private sector on this?

Alex Stamos:

So I like what CISA has done so far. I mean, I think their guidelines are smart. CISA—like I said before, I think a key thing for CISA to focus on right now is to get the reporting service up. One of the problems we have as defenders is we don't talk to each other enough. The bad guys are actually working together. They hang out on these forums, they trade code, they trade exploits. But when you deal with a breach, you're often in a lawyer-imposed silo where you're not supposed to talk to anybody, and not send any emails, and not work together. And I think CISA breaking those silos apart, so that we are working together, is a key thing they can do.

Rep. Eric Swalwell (D-CA):

Do you see legal risks that are keeping companies away from smart, transparent responses?

Alex Stamos:

Yeah, unfortunately. As I put in my written testimony, I once worked an incident response where there were four law firms on every single call, because different parts of the board were suing each other, and there was a new CEO and an old CEO, and it was a mess. You can't do incident response in a situation where it's all over-legalized. And I think part of this comes from the shareholder stuff: any company that deals with a security breach, any public company, automatically ends up with derivative lawsuits that they spend years and years defending, which don't actually make anything better. And then part of it is the regulatory structure of the SEC and such, creating rules that kind of really over-legalize defense.

Rep. Eric Swalwell (D-CA):

Do we have the talent pool or the willingness of individuals right now to go into these fields to work as a chief information security officer?

Alex Stamos:

So we have a real talent pool problem on two ends. One, I don't want to say the low end, but the entry-level jobs. We are not creating enough people for the SOC jobs, the analyst jobs, the kinds of things that most companies need. And I think that's about investing in community colleges and retraining programs to help people get these jobs, either mid-career or without going and doing a computer science degree, which really isn't required for this work. And then at the high end, chief information security officer—CISO is the worst C-level job in all of public capitalism.

Rep. Eric Swalwell (D-CA):

Why is that?

Alex Stamos:

Sure, sir. Because—when I was a CISO and I would walk into the room, folks would mutter under their breath like, oh my god, Stamos is here. And it's partly because you're kind of the grim reaper, right? You're only there for negative downside effects for the company. You have no positive impact on the bottom line, generally. And so it's already a tough job, but what's also happened is that there are now legal actions against CISOs for mistakes that have been made by the overall enterprise. And this is something else I'm very critical of the SEC about: they're going after the CISO of SolarWinds.

Rep. Eric Swalwell (D-CA):

Is that a deterrent to people wanting to be a CISO?

Alex Stamos:

Oh, absolutely. I have two friends just this last month who have turned down CISO jobs because they don't want the personal liability. They don't want to be in a situation where the entire company makes a mistake and then they're the one facing a prosecution or an SEC investigation. It's become a real problem for CISOs.

Rep. Eric Swalwell (D-CA):

I yield back. Thanks.

Rep. Andrew Garbarino (R-NY):

The gentleman yields back. I now recognize my friend from Florida, Mr. Gimenez, for five minutes of questioning.

Rep. Carlos Gimenez (R-FL):

Thank you, Mr. Chairman. I just asked my chat app whether there are AI systems right now engaged in protecting dedicated systems in the United States and around the world, and it said yes. So we do have the rudimentary features of AI. Some months ago we were at a conference, or at least a meeting, with a number of firms—Google, Apple, all those—and I asked them a question in terms of AI: where are we? Imagine that 21 is an adult—where are we in that race? And they refused to answer and give me an age. What they did do, though, is they said we're in the third inning. So, baseball analogy, nine innings is the full game. So we're one third of the way there, which is kind of scary, because the capabilities right now, as I understand them, are pretty scary. So at the end of the day, do you think this will all become elementary? I mean, it appears to me that cyber attacks are going to be launched by artificial intelligence networks, and they're going to be guarded against by artificial intelligence networks, and whoever has the smartest artificial intelligence is going to win the race, or is going to win out in that battle or war, et cetera. Would that be accurate? Yeah, anybody?

Alex Stamos:

Yes, sir, that's absolutely right.

Rep. Carlos Gimenez (R-FL):

So right now it means that we have to win the artificial intelligence battle, or is this just going to be a race that's going to go on forever?

Alex Stamos:

Yes. I mean, I think basic economic competitiveness is absolutely dependent on us maintaining our lead in overall AI technologies, but then especially AI technologies that are focused on cybersecurity.

Rep. Carlos Gimenez (R-FL):

Where do you see the future? Am I too far off? It's just going to be machines going at each other all the time, testing each other, probing each other, defenders against each other, and then somebody will learn a little bit more and get into one system, and then that organization learns and attacks the next one. But is this just going to be continuous, around-the-clock cyber warfare?

Alex Stamos:

Yeah, unfortunately, I think that's the future we're heading to. I mean, it was seven years ago, in 2016, that DARPA ran an event in which teams built computers that hacked each other without human intervention, and it was successful. And so we're seven years on from that kind of basic model already happening. I'm very afraid of the constant attacks. The other thing I'm really afraid of is smart, AI-enabled malware. If you look at the Stuxnet virus, which the US has never admitted to having been a part of, whoever created Stuxnet spent a huge amount of money and time building a virus that could take down the Natanz nuclear facility. And it required a huge amount of human intelligence, because it was specifically built for exactly how Natanz's network was laid out. My real fear is that we're going to have AI-created malware that won't need that. If you drop it on computers inside of an air-gapped network, a critical infrastructure network, it will be able to intelligently figure out: ah, this bug here, this fault there, and take down the power grid, even if you have an air gap.

Rep. Carlos Gimenez (R-FL):

This is just conjecture, okay? Could we ever come to the point where we say, what the devil? Nothing is ever going to be safe, so chuck it all and say we're going to go back to paper, and we've got to silo all our stuff. Nothing can be connected anymore, because anything that's connected is endangered. All our information is going to be vulnerable no matter what we do. Eventually somebody will break through, and then we're going to be at risk. Is it possible that in the future we just say, okay, enough, we're going back to the old analog systems? Is that a possibility?

Debbie Taylor Moore:

I'd like to answer this. I think that in our industry in general, we have a lot of emphasis on the front end: detection of anomalies and findings, and figuring out what we have on the network, and trying to manage threats and attacks. And I think there's less emphasis on resilience, because bad things are going to happen, and the true measure is how we respond to them. And AI does give us an opportunity to work toward: how do we reconstitute systems quickly? How do we bounce back from severe or crushing attacks, or with critical infrastructure that's physical as well as cyber? And so when you look at the solutions that are on the market in general, the majority of them are on the front end of this loop. And the back end is where we need to really look at how we prepare for the onslaught of ways creative attackers might use AI.

Rep. Carlos Gimenez (R-FL):

Good. Thank you. My time is up, and I yield back.

Rep. Andrew Garbarino (R-NY):

The gentleman yields back. I now recognize Mr. Carter of Louisiana for five minutes of questioning.

Rep. Troy Carter (D-LA):

Thank you, Mr. Chairman, and thanks to all of the witnesses for being here. As interesting as this content is, and as exciting as it is, so is the fear of how bad it can be. What can we learn from the lack of regulation of social media, Facebook and others, on the front side, that we can do better with AI, Ms. Moore?

Debbie Taylor Moore:

Sure. I think that there are many lessons to be learned. I think that, first of all, from a seriousness perspective, AI has everyone's attention now that it's disintermediated, sort of, from all the middle people, and it's directly in the hands of the end users. And now folks have workforce productivity tools that leverage AI. We have been using AI for years and years. Anybody here who has a Siri or Alexa, you're already in the AI realm. And part of what we have to consider is one of the points that Congressman Swalwell brought up around the idea of academics and upskilling, and making sure that people have the AI skills necessary to be part of this new era. We have to train folks. We are training over 2 million people over the next three years purely on AI. We've all got to upskill. This is all of us, collectively. And I think also a point was brought up about the harmonization piece. I think that this is the sector where we can all agree that if we aren't expedient in the way we approach it, it's going to run right over us.

Rep. Troy Carter (D-LA):

So let me re-ask that. Thank you very much. But what I really want to know is: we're here. It's here. How can we learn, and how can we regulate it better, to make sure that something that has this much power and so much potential to be good—we thwart the bad part. One example I recently saw on social media a few weeks ago: a message from what looked like, sounded like, the president of the United States of America, giving a message. Now, to the naked eye, to the individual out there who's not paying attention to the wonders of AI, that is the president. How do we manage that from a security-risk standpoint? How do we know that this person purporting to be a secretary or a deputy, telling us about a natural disaster or a security breach, isn't some strange actor? Any one of you—in fact, everyone, quickly; we have about two minutes. Mr. Stamos.

Alex Stamos:

So on the deep fakes for political disinformation, I mean, I think one of the problems right now is it is not illegal to use AI to create this. There's no liability for creating totally fake things that say embarrassing stuff and are used for politics. It's totally legal to use AI in political campaigns and social media for the moment, right? So I would start there and then work our way down. I do think the platforms have a wide responsibility here to try to detect it, but it turns out detection of this stuff is a technological challenge.

Rep. Troy Carter (D-LA):

Mr. O'Neill.

Timothy O’Neill:

I was going to say, if we could focus on authentication, and giving consumers and the public the ability to easily validate the truth of what they're seeing, that would be important. And the other thing that was spoken about, which I agree with Ms. Moore about the back end, is making sure that we have these resilient systems. As we've learned with social media and cybersecurity in general, it's an arms race. It always has been. It always will be. And we're always going to be defending, in spy-versus-spy-style activities, trying to outdo each other. We need to make sure that we have the back-end systems, the data that's available, the ability to recover quickly and get back to normal operations. Thank you.

Rep. Troy Carter (D-LA):

We've got about 40 seconds. Thank you very much. Ms. Moore, did you have anything more to add? And then I want to get to Mr. Swanson.

Debbie Taylor Moore:

I'd just say that there are technologies available now that do look at sort of defending reality, so to speak, but that disinformation, and the mayhem that it causes, is an extreme concern. And I think that the industry is evolving.

Ian Swanson:

From a manufacturer-of-AI perspective, we need to recognize, and we need to understand, that AI is different from typical software. It's not just code. It's data. Yes, it's code, but it's a very complex machine learning pipeline that requires different tactics, tools, and techniques. In order to secure it, we need to understand, and we need to learn, that it's different in order to secure ML.

Rep. Troy Carter (D-LA):

And the disadvantage that we have is, oftentimes the bad actors are moving as fast, if not faster, than we are. So we stand ready, particularly from the committee standpoint, to work closely with you to identify ways that we can stay ahead of the bad actors and make sure that we're protecting everything from universities to bank accounts to political speech. There's a real danger. So thank you all for being here. Mr. Chairman, I yield back.

Rep. Andrew Garbarino (R-NY):

The gentleman yields back. I now recognize Ms. Lee of Florida for five minutes.

Rep. Laurel Lee (R-FL):

Thank you, Mr. Chairman. Yesterday, it was widely reported that China launched a massive cyber attack against the United States and our infrastructure. This attack is just one discrete event in a decades-long cyber warfare campaign launched against the United States. We should not expect these threats to lessen, and we should continue to engage with the proper stakeholders to determine how best to defend our infrastructure. And one of the things that's so important is what each of you has touched on here today: how artificial intelligence is going to empower, equip, and enable malicious cyber actors to do potential harm to the United States and our infrastructure. I'd like to start by returning to something you mentioned, Mr. Stamos. I was interested in this point during your exchange with Mr. Gimenez: you described a scenario where artificial intelligence malware could essentially deploy within critical infrastructure on an air-gapped grid. Can you share with us a little bit more about how you visualize that threat occurring? How would it get to the air-gapped system in the first place?

Alex Stamos:

Right. So the example I was using is the most famous example of this, Stuxnet, where the precise mechanism, how it jumped the air gap, has not been totally determined, and one of the theories is that Stuxnet had spread pretty widely among the Iranian population, and somebody made a mistake: they charged a phone, they plugged in a device at work, and then it jumped via the USB device into the network. And there are constant—whenever you work in secure air-gapped networks, there are constant mistakes being made, where people hook up their laptops, people bring devices in, things like that.

Rep. Laurel Lee (R-FL):

Thank you. And Ms. Moore, I'd like to go back to your point when you talked about really this inevitability—that there will be incidents, that there will be vulnerabilities—and that one of the things we can do that's most productive is to focus on resilience, recovery, rebuilding. You've had unique experience working previously in the federal government on several cybersecurity initiatives. Would you share with us your perspective on how DHS and CISA can best be thinking about those goals, and how we should be measuring success and efficiency in that way?

Debbie Taylor Moore:

That's a really good question. I think that one of the things we have to move away from, in general, is measuring based on compliance and only on the knowledge we have around what we know is a known threat. And so again, as I said earlier, we spend a lot of time cataloging all of our concerns. And I think that when you look at us, and you look at industry globally, and you look at the power of AI, and you consider the way that we manage markets today—the way that we have transactional data moving all over the globe, in context, and we have the ability to have that information in front of us in real time—that's the way security needs to be. That's the way threat intelligence needs to be. It needs to be that process across all regions, but curated for the specific sector. And so it'd be a way of sort of having a common record of capability among all of the critical infrastructure players and DHS and the FCEB agencies, and something that we would hope would be a tool in helping us to at least stay ahead of the threat.

Rep. Laurel Lee (R-FL):

And as far as the EO itself, which directs DHS to develop a pilot project within the federal civilian executive branch systems, is there any specific information that you think would be useful for DHS and CISA to share with the private sector as they determine lessons learned from the pilot?

Debbie Taylor Moore:

For sure, yeah. I think that there is extreme importance around what we consider to be iterative learning. In the same way that AI models go out and they train themselves, literally train themselves iteratively, we need to do the same thing. In so many instances throughout global enterprises, we have lessons learned, but they're not always shared completely, nor do we map those threats in a way that lets us see collectively where the gaps are, and do that constantly.

Rep. Laurel Lee (R-FL):

And Mr. O'Neill, a question for you. Do you find that, generally, firms in the private sector are considering the cybersecurity track record and profile of artificial intelligence products when deciding whether to use them? And how can CISA better encourage that type of use of AI that is secure by design?

Timothy O’Neill:

Thank you for the question. I'm a big fan of CISA, particularly the guidance, the strategic and business information they're providing to businesses about threat actors and so forth. And their secure by design. One of the things they call for is doing threat modeling. And when you're designing applications and systems and so on, if you're doing the threat modeling, you're basically already having to think through and know that you're going to be attacked by automated systems or have AI used against you. So I think that helps. Sorry, I don't know what that was. Oh.

Rep. Laurel Lee (R-FL):

Not to worry.

Timothy O’Neill:

That would be one thing. The other thing I would recommend to CISA: they're highly focused on providing great resources about the types of exploits that attackers are using, and it really helps with defenses and so forth. But again, if they would take some of that leadership focus and put it on resiliency, recovery, and response, so that companies, it's a matter of time before you will likely have an event; it's whether you are able to respond to that event. And there are many companies, such as the one that I work for, that help companies prepare for this inevitable event to be able to recover and so forth. But having the workaround procedures, especially for critical infrastructure, to keep it working and functional so that it can carry out its duty while the recovery occurs, that type of thing. And keeping your data secured so that it's ready, and before the attackers get to it and encrypt it, you can go to a known good copy; those are very important. I think they could expand their scope a little further to help companies really have the workaround procedures and the testing and so forth, just like they do the red team testing to probe the network and try to prevent the issues. But also on the back side, to recover and learn from the incidents to drive continuous improvement. Thank you.

Rep. Laurel Lee (R-FL):

Thank you, Mr. O'Neill. Mr. Chairman, I yield back.

Rep. Andrew Garbarino (R-NY):

Not a problem. I'll just deduct the extra time from Mr. Menendez. I now recognize Mr. Menendez from New Jersey for three and a half minutes.

Rep. Rob Menendez (D-NJ):

I appreciate that, Mr. Chairman. And I'd always be happy to yield to my colleague from Florida, who's one of the best members of this subcommittee, and I always appreciate her questions and observations. Mr. Chairman, Mr. Ranking Member, thank you for convening today's hearing. To our witnesses, thank you for being here. I want to speak about one of the fundamental structural issues with AI: how its design can lead to discriminatory outcomes. The types of generative AI that have captured public attention over the last year produce content based on huge quantities of data. Here's the problem: if those vast quantities of data, those inputs, are biased, then the outcome will be biased as well. Here are a few examples. The Washington Post published a story last month about how AI image generators amplify bias in gender and race. When asked to generate a portrait photo of a person in social services, the image generator Stable Diffusion XL produced images exclusively of non-white people. When asked to compose a portrait photo of a person cleaning, all of the images were of women. In October, a study led by Stanford School of Medicine researchers was published in the academic journal Digital Medicine that showed that large language models could cause harm by perpetuating debunked racist medical ideas. These questions are for any of our witnesses. How can developers of AI models prevent these biased outcomes?

Debbie Taylor Moore:

First of all, in terms of both a security standpoint as well as a bias standpoint, all teams need to be diverse. And let me just say that from a security standpoint, when we're doing things like red teaming and we're going in and assessing vulnerabilities, we require a team of folks that are not just security people. We need humans who are also very deep in terms of subject matter expertise around AI and how people develop patterns, train models associated with malware that may be adaptive in nature, but those teams don't look like our traditional red teaming crews. On the bias front, the same thing. The data scientists, developers, and folks that are building the models and defining the intent of the model need to look like everybody else who is impacted by the model. And that's how we move further away from disparate impact, where some groups are impacted more than others.

Algorithms decide who gets into what school, what kind of insurance you have, where you live, if you get a mortgage, all of these matters; these are very important things that impact people's lives. And so when folks are building models, the intent of the model and the explainability of the model, being able to explain the purpose, where the data came from, and attribute those sources, being able to ensure that the model is ethical, these are all things that security may be able to point out to you as problems, but the tone is set at the top of the organization in terms of decision making.

Rep. Rob Menendez (D-NJ):

I want to follow up with you, and then I'll circle back to any of the other witnesses on that initial question, but that's something we've sort of grappled with on this committee: simply, workforce development within the cyber community and what that looks like, and then making sure, especially with AI, as you alluded to in your answer, that it's reflective of the larger population. In your opinion, how do we build teams? How do we expand the cyber workforce so it's a diverse group of people that can bring these backgrounds into the cyber career?

Debbie Taylor Moore:

Well, I think it's key. I know that IBM, for instance, has stood up 20 HBCU cybersecurity centers across 11 states, and this is all at no additional cost to the folks who receive this training. I think that AI is not unlike cybersecurity. I think that when we look at the threats associated with AI, it's just an expansion of the attack surface. And so we really need to treat this not as a completely, totally different thing, but apply the tactics that have worked in educating and training people, and ensure that there is not a digital divide in AI and quantum and cybersecurity and all of the emerging technology areas. And I also think that the best approach is to implement these things K through 12, to start when folks are very young, so that as they grow and as the technologies evolve, the knowledge can be evolving as well.

Rep. Rob Menendez (D-NJ):

I agree with that approach and would love to build that from an earlier age. I have to pivot real quickly. One of the things that I do want to focus on is, less than a year out from the 2024 election, we see the potential for generative AI increasingly spreading misinformation with respect to our elections. For any of the witnesses: what specific risk does AI pose to election security?

Alex Stamos:

I think there's too much focus on a specific video or image being created of a presidential candidate. If that happened, every media organization in the world would be looking into whether it's real or not. I think the actual danger from AI in 2024 and beyond, and moreover, you've got India, you've got the EU, there's a ton of elections next year. The real problem is that it's a huge force multiplier for groups who want to create content. If you look at what the Russians did in 2016, they had to fill a building in St. Petersburg with people who spoke English. You don't have to do that anymore. A couple of guys with a graphics card can go create the same amount of content on their own. And that's what really scares me: that groups that used to not have the ability to run large professional troll farms can generate all this content, the fake photos, the fake profiles, the content that they push, so that now a very small group of people can create the content that used to take 20 or 30.

Rep. Rob Menendez (D-NJ):

And it'll be swiftly shared right through social media. So your force multiplier point is exactly right; not just the production, but the sharing capability as well rapidly increases the spread of it. And that's going to be a challenge. I wish I had more time, but the chairman docked me at the beginning of my line of questioning, so I have to yield back the time that I don't have. Thank you.

Rep. Andrew Garbarino (R-NY):

You're not allowed to take time from me, so it's all right. I believe we're going to do a second round because this is so interesting. So the gentleman yields back time that he didn't have.

I now recognize myself for five minutes of questions. Ms. Moore, Mr. O'Neill brought up red teaming in one of his answers before, and I understand CISA is tasked in the executive order with supporting red teaming for generative AI. Do you believe CISA has the expertise and bandwidth necessary to support that? And what would a successful red teaming effort look like?

Debbie Taylor Moore:

I think that CISA is like everyone else. We're all looking for more expertise that looks like AI expertise in order to be able to change the traditional red team. With a traditional red team, the important piece of it is that you're essentially testing the organization's ability to both detect the threat and also how well the organization responds. And these are real world simulations. And so once you've established that there are gaps, the hard part is remediation. The hard part is, now I need more than the folks that have looked at all of this from a traditional security standpoint; I need my SMEs from the data scientist, data engineer perspective to be able to help figure out how to remediate. And when we are talking about remediation, we're back to where we started in terms of the discussion around, we have to close the gaps so that they are not penetrated over and over and over again.

Rep. Andrew Garbarino (R-NY):

So I guess there's a concern that if we find the weaknesses, we might not have the knowledge to fix them.

Debbie Taylor Moore:

We have to upskill.

Rep. Andrew Garbarino (R-NY):

Okay. Mr. O'Neill, another question about the EO. CISA is tasked with developing sector-specific risk assessments in the EO, but I understand there are many commonalities or similar risks across sectors. How can CISA ensure that it creates helpful assessments that highlight unique considerations for each sector? Or would it make more sense for CISA to evaluate risk based on use cases rather than sector by sector?

Timothy O’Neill:

I believe CISA needs to take an approach like they've done in other areas, and it's a risk-based approach based on the use cases within the industries, because you're going to need a higher level of confidence for an artificial intelligence system that might be used in connection with critical infrastructure and making decisions, versus artificial intelligence that might be used to create a recipe or something like that for consumers. But the additional place where CISA could really help again is the secure by design: making sure that when you're doing threat modeling, you're not only considering the malicious actors that are out there, but also the inadvertent errors that could occur that would introduce bias into the artificial intelligence model. Thank you.

Rep. Andrew Garbarino (R-NY):

So you've just said it, they've done this before. So there are existing risk assessment frameworks that CISA can build off of. What would they be? And that's for anybody, if anybody has the answer there.

Debbie Taylor Moore:

I'll take that one. I think that one that is tremendous is MITRE ATLAS. MITRE ATLAS has all the attacks relevant to AI that are actually real world attacks, and they do a great job of breaking them down by the background. Everything from reconnaissance to discovery, to tying and mapping the events of a threat actor to their tactics, techniques and procedures, and giving people sort of a plan on how to address these from a mitigation standpoint, how to create countermeasures for these instances. And the great part about it is that it's real world, and it's free. It's right out there on the web. And I would also say that one other resource that CISA has at its disposal, which is very good, is the AI RMF, the risk management framework; the playbooks are outstanding. There's the risk management framework, but the playbooks literally give folks an opportunity to establish a program that has governance.

Rep. Andrew Garbarino (R-NY):

Mr. Swanson, CISA's AI roadmap details a plan to stand up a JCDC for AI. This committee has had questions about what CISA does with the current JCDC, and we haven't gotten them all answered, but they want to do this JCDC for AI to help share threat intel related to AI. How do you share information with CISA currently, and what would be the best structure for JCDC.AI? What would that look like?

Ian Swanson:

Yep, thanks for the question. Like my fellow witnesses up here, we talked about how the sharing of information and education is going to be critical in order for us to stay in front of this battle, this battle for secure AI. You asked specifically, how does my company share? We actually sit in Chatham House rule events with MITRE, with NIST, with CISA in the room, and we share techniques that adversaries are using to attack these systems. We share exploits, we share scripts, and I think more of this education is needed, also between the security companies that are up here, so that we can better defend against AI attacks.

Rep. Andrew Garbarino (R-NY):

Thank you very much. My time is up. I think we're going to start a second round. I will now recognize, I believe for the second round we start with Mr. Gimenez for five minutes.

Rep. Carlos Gimenez (R-FL):

Thank you, Mr. Chairman. I'm going back to my apocalyptic view of this whole thing. Fine. And I guess I may have been influenced by Arnold Schwarzenegger in those films, with the machines coming from the future and battling each other. And I think that it's not even far off. I mean, it's not going to be like that, but I'm saying the machines battling each other is going to be constant. The artificial intelligence battling the artificial intelligence, until the one that remains in control will locate and penetrate and defeat the other system, whether it's the aggressor or the defender. Which to me makes it much more important that we are resilient and that we're not wholly dependent on anything. And instead of becoming more and more dependent on these systems, we become less dependent. Yes, it's nice to have; as long as they're working, you're great, but you have to expect that one day they won't be working and we have to continue to operate.

So where are we in terms of resiliency, of our ability to decouple critical systems vital for America? Our electric grid would be one, the pipelines would be another, et cetera. All those things that are vital to our everyday life. Where is CISA in trying to get companies and the American government to be able to decouple or extract themselves from the automated systems and still give us the ability to operate? Because I do believe that every one of those systems eventually will be compromised, eventually will be overcome, eventually will be attacked, and we may find ourselves in really bad shape, especially if it's an overwhelming kind of attack to try to cripple America. So does anybody want to tackle that one? Because we seem to be looking more and more at how we can defend our systems, and I believe that that's great, but those systems are going to be compromised one day. They're going to be overwhelmed one day. So we have to have a way to not be so dependent on those systems so that we can continue to operate.

Debbie Taylor Moore:

I think that any kind of preparation for the inevitable, or preparation for potential disaster or catastrophe if you will, is really rooted in exercises. I think that from an exercise perspective, we have to look at where we are vulnerable, certainly, but we have to include all the players. And it's not just the systems that get attacked, but also everything from every place within the supply chain, as well as emergency management systems, as well as municipalities and localities. And I think that one of the things that CISA does so well is around PSAs, for instance. And I know that this is sort of like a primary step in this realm. And what I mean by that is, does the ordinary American know exactly what to do if they go to the ATM and it's failed, or if their cell phone is not working, or if they can't get–

Rep. Carlos Gimenez (R-FL):

No, if the cell phone is not working, we're done.

Debbie Taylor Moore:

Yes, exactly. Exactly. And so we have to have standard strategies, and the key piece of that is that these things have to be developed and also communicated, so everyone sort of knows what to do when the unthinkable happens.

Rep. Carlos Gimenez (R-FL):

Yes. Mr. Swanson.

Ian Swanson:

Something to add here. You mentioned AI attacking AI. What is actually being attacked and what is vulnerable? What is vulnerable is the supply chain. It's how AI is being engineered. It's the ingredients. As I referenced before in my baking analogy, most of AI is actually built on open source software. Synopsys did a report that 80% of the components in AI are open source. Open source is at risk. CISA can set guidelines and standards and, with the government's help, bug bounties for actually going in there and securing the supply chain. That's what AI will be attacking.

Rep. Carlos Gimenez (R-FL):

Well, I understand about the supply chain. I was actually worried about the critical infrastructure itself, our grid, our electric grid being knocked out, our energy grid being knocked out, and you're talking about the supply chain, et cetera, food and all that being knocked out. And I'm not so sure that we are resilient. I'm not so sure that, I'm pretty sure that we have relied way too much on automated systems that are going to be very, very vulnerable in the future, and that we haven't focused enough on resiliency: if in fact the systems we are highly reliant on go down, do we have a way to operate without those systems?

Ian Swanson:

Mr. Chairman, if I may, I'd like to respond. Your scenario, I totally get it, and let me play it back. Industry....

Rep. Carlos Gimenez (R-FL):

By the way, the movies consisted of terminators. Okay, go ahead.

Ian Swanson:

The industry: energy pipelines. The use case: predictive maintenance on seals and valves. The attack: we're going to trick and manipulate models to purposely invalidate alerts, and the pressures impact physical and mechanical failure. How do we remediate, how do we solve for this? This is where pen testing and red teaming come in. Example solved. When I talk about the supply chain, it's how these things are built and making sure those are resilient, but I agree that we've got to protect the critical infrastructure, and we need to take inventory of what machine learning is in the critical infrastructure today and penetration test those machine learning models.

Rep. Andrew Garbarino (R-NY):

Thank you. Thank you. Your time is up. The gentleman yields back. I now recognize the ranking member from California, Mr. Swalwell, for second round questions.

Rep. Eric Swalwell (D-CA):

Thank you, Chairman. And Ms. Moore, moving to the international realm, how important is it that our international partners and allies work with us in setting AI security standards, and what role do you see for CISA and the Department of Homeland Security in supporting this effort?

Debbie Taylor Moore:

What I see internationally is that the whole world depends quite a bit on the National Institute of Standards and Technology, NIST. I see that with quantum safe, and I see that also with AI, and this foundational way of thinking about things offers us a level of interoperability that makes it as global an issue as the way that we operate as a global society. I think from the standpoint of the work that's occurring today with CISA and DHS, I feel that globally they're really focused on leveraging those tools, and the communications aspect of it. We see a lot of adoption around the world of people picking up these best practices and standards, and so I think we need to continue in that direction as much as possible for the future, but it's very similar to many other areas that CISA and NIST and DHS work on today.

Rep. Eric Swalwell (D-CA):

Great. Thank you. Mr. O'Neill, just as a part of an international company, what's your perspective on that?

Timothy O’Neill:

Yes, one of CISA's strengths is the way that they reach out and constantly engage with stakeholders, both in the US and in international circles. Cybersecurity is a team sport, and cybersecurity practitioners in the US and internationally need to work together to be able to face the common threat. I think that's key.

Rep. Eric Swalwell (D-CA):

Mr. Stamos, I want to vent a little bit. As a former prosecutor, perhaps there's no crime today where the perpetrator faces less of an impediment in its punishment than cyber crime. It's actually frustrating to see, whether it's an individual who's the victim, whether it's, as you said, any size company, or our country. And it's frustrating because you can't punish them. It seems like they're just untouchable, and I want you to maybe talk a little bit about that. Given that these attacks are coming from Russia or China or other Eastern European countries, many of them are not going to recognize a red notice, so we could work up a case and send a red notice to Moscow; they're not going to go grab these guys. Do you see any deterrent that's out there? Is there a way to punish these guys? Does AI help us? And I know we have our own limitations on going offensive for domestic companies, but should we reexamine that? How do we inflict a cost on these actors who are just savage in the way that they take down our individuals and businesses?

Alex Stamos:

Yeah, I mean, it is extremely frustrating to work with companies and to watch these guys not just demand money, but text family members of employees and target small vendors just to intimidate them, and effectively laugh about it. I mean, I think there's a bunch of things we could do. One, I do think the FBI workups and the red notices do have a deterrent effect. They all have to go keep their money in Russia, especially with the winters. And to seal in people, 22 year olds that can never travel for the rest of their lives, I think that is a positive thing. Like, uh, enjoy Kazakhstan, right? And so I do think that's good. I would also like to see, obviously I don't see what happens on the covert side. It felt like after Colonial Pipeline that there was an offensive operation by Cyber Command against a lot of these groups to try to deter these guys and to disrupt their operations, and that is perhaps slacking off. And so I would like to see that from the United States. I don't think private firms should do it, but I do think the US offensive capability should be used against them. And then I think it's seriously time for Congress to consider outlawing ransomware payments.

Rep. Eric Swalwell (D-CA):

And can we just briefly talk about that? Because you and I have talked about this for a long time, and I do believe that in a perfect world banning them stops it, but what do you do in the gap between the day you outlaw them and the weeks after, where they're going to test to see if they're paid? You could see just a crippling of critical infrastructure.

Alex Stamos:

I mean, if you outlawed ransomware payments, there would be six months of carnage as they tried to punish the United States into reversing it. I imagine a couple of things have to happen here. One, I think that this is something Congress should do, not this administration unilaterally, because I think it needs to be a unified political stand of both political parties saying, we are not doing this anymore. We are not sending billions of dollars a year to our adversaries to attack us. And so it doesn't become a political football. If the administration did it by themselves, I think it would be much easier to blackmail them into undoing it, right? Congress needs to speak with one voice here. Second, I think Congress would need to phase it in, delay the implementation, and especially focus on nonprofits and local and state municipalities. You could be buying them insurance policies. There's been a bunch of interesting work around state national guards, state guards, and direct authorities. I know CISOs want to get direct orders so that if something bad happens to a state or locality, they have the legal authority to go work with them. I do think, though, it's time to do this, because the current status quo is not working.

Rep. Eric Swalwell (D-CA):

Great. I yield back, but again, Chairman, I think this has been one of our most productive hearings this year, and I thank you and the witnesses for making it so constructive.

Rep. Andrew Garbarino (R-NY):

Thank you. The gentleman yields back. I now recognize Mr. Ezell from Mississippi for five minutes of questions.

Rep. Mike Ezell (R-MS):

Thank you, Mr. Chairman, and thank you all for being here today and sharing with us, because we are playing catch-up and we know that. The capabilities of AI are advancing very rapidly, as we've talked about today. It's just kind of like when you buy a phone: it's outdated, and they want to sell you another one. I have some concerns about government oversight and overregulation that I'd like to talk about a little bit. I spent most of my career as a law enforcement officer, a sheriff. I've seen firsthand how government red tape can get in the way of law enforcement, and if American industry is smothered by regulation and reporting requirements, our adversaries are going to develop new AI capabilities before we do, and we cannot let that happen. I have concerns that the Biden administration's executive order on AI gives several departments and countless agencies rule over AI. Specific to this committee's jurisdiction, DHS is tasked with establishing guidelines and best practices around AI. As always, when regulating an industry, especially when the government's involved, the words must be clear in their intent so that we can get it right. With that in mind, how could a lack of coordination between federal agencies and private industry, especially when establishing guidelines, hamper innovation in AI?

Debbie Taylor Moore:

I think it's most important that we focus on not hampering innovation, for starters. And by that, what I mean is that we have these open source systems, where people in medium and small businesses or technology groups or research and development groups have an opportunity to innovate and help bring us further along than where we are today, from a cybersecurity standpoint, from an AI standpoint. And we can't stifle that innovation. A lot of the greatest ideas come out of those entities. But we also have to guard against the idea of AI as a technology, and this is like an inflection point, being too important a technology to be just in the hands of a small group of large organizations, let's say. And so I think that there is a balance that needs to be struck. We need to be able to walk and chew gum at the same time, but we need thoughtful leadership toward achieving AI that's not predatory, achieving AI that's open, achieving AI that is like when you walk into a restaurant and you get to see the kitchen and how the people are cooking your food, and whether there's cleanliness and there are good best practices there. AI needs to be open as well in that way.

Rep. Mike Ezell (R-MS):

That's why we want to try to keep the government out of it as much as possible. Ms. Moore, in your opinion, should DHS insist on having a role in the regulation of AI?

Debbie Taylor Moore:

I think that DHS and CISA have a lot of input and a lot of important learnings that need to be incorporated in any kind of discussion around regulation. I also think that, particularly with AI, we have to look at the use cases. We really have to examine that, and we need to offer a standard of care that allows us to not be draconian. This is an evolving space, and so we want to make sure that the folks who are closest to it are experts and are also involved in providing input.

Rep. Mike Ezell (R-MS):

Thank you very much. I was listening to Representative Swalwell speak about the lack of prosecution, the lack of anything getting done, and going back to my small community: one of our local churches got hacked and everything was disabled, and the preacher had to use his own credit card to pay $500 to get them to turn that thing loose. Our hospital system was hacked, and it goes on and on, and it seems like there's just no recourse. It's almost like credit card fraud sometimes. And as a law enforcement officer, I've seen so many victims out here, and there's hardly anything that can be done about it. Would any of you like to expand on that just a little bit?

Alex Stamos:

If I may, sir? I think you're totally right. I think one of our problems is we have this serious gap in law enforcement between the locals and the FBI. If you're a big company, you can call an FBI contact; you can get three or four of them on the phone with you. They'll be as supportive as possible. If it is Mr. Luttrell's flower shop or the church in your district, they're not going to get an FBI agent on the phone. And if they call the local police, generally those folks are not prepared to help with international cyber crimes.

Rep. Mike Ezell (R-MS):

Yes, we're not.

Alex Stamos:

And so I do think there's a gap here that Congress should study how to fill. A good example where this has been solved is in what's called the ICAC, which is the child safety world, which I've done a bunch of work in, where local folks are trained and supported by federal organizations to do child safety work. And in the end, it's local sheriff's deputies and local detectives, but they can call upon investigative resources from the Secret Service, the FBI, or HSI. And I think something like that around cybercrime, or investing in the local capabilities, would be a good idea.

Rep. Mike Ezell (R-MS):

Thank you very much. And Mr. Chairman, I yield back and thank you all for being here today.

Rep. Andrew Garbarino (R-NY):

Thank you, Mr. Ezell. Gentleman yields back. I now recognize Mr. Carter from Louisiana for five minutes.

Rep. Troy Carter (D-LA):

Thank you, Mr. Chairman. Ms. Moore, you mentioned in your earlier comments IBM and the efforts that you have with HBCUs. Can you expound on that, as we know that HBCUs have been the target of several cyber attacks?

Debbie Taylor Moore:

Yes, indeed. So we developed a program where we roll out these skill sets with what we call CLCs; they're cybersecurity leadership centers in HBCUs around the country. So they're roughly in 11 different states, but 20 of them are working with the faculty and working with a liaison inside the HBCU to develop and to share curricula that we've established that are very professional grade in terms of our own expertise that we bring to it. But we recognize that there's a tremendous amount of talent everywhere and that we really have to pursue it, given the skills gap that we understand in cybersecurity. It's kind of, as someone mentioned on the panel here, a team sport, and we need all hands on deck, and we also need to ensure that communities are not left behind, that everyone has an equal opportunity to be able to learn the skill sets and have the qualifications necessary to work in this important field.

Rep. Troy Carter (D-LA):

You mentioned 10 states or 10 institutions, I don't know if it's 10 states or 10 institutions, but whatever the case, for HBCUs that are out there that are in need of the help that you indicated, is there more throughput to include additional ones? Is there any direction you can provide me? I represent Louisiana, with a rich selection of HBCUs, and would love to have IBM partner or look at opportunities to be a part of that. Any direction you can provide?

Debbie Taylor Moore:

Well, it's 20 centers across 11 states, and we'd be happy to talk to you about what you would like to see happen there in Louisiana.

Rep. Troy Carter (D-LA):

Fantastic. Thank you. Mr. O'Neill, with the emergence of AI, are you concerned about what this means for academia, with students using ChatGPT or others for term papers or research, or the validity of students completing an exercise or an assignment without cheating, if you will, through AI?

Timothy O’Neill:

Yes, I'm concerned about that, but it also enables students to be more capable, with more information, and maybe to even be more effective in what they're learning and so forth. So they're going to have to teach students differently in a world with AI. They're going to have to learn to use and write prompts to get the information out of AI, and they're going to have to learn to check the sources that it is citing in the output from the AI to validate that they're not receiving a hallucination–hard word for me to say–and so forth.

Rep. Troy Carter (D-LA):

What about the student that asks a direct question to ChatGPT and inserts the answer based exactly on what was asked? How do we determine the validity of that? How do we make sure that students are not misusing it, while we understand that it's a great tool for research? And somebody can chime in. Ms. Moore or Mr. Stamos, looking at you guys.

Timothy O’Neill:

Yeah, I would just say it's like a mini arms race, where you have the students that want to use it, some of them for nefarious purposes, but then you have the counter programs that academia is using to identify when it's being used and so forth. So just now, I was reading in the news about this, where AI detects the use of AI.

Rep. Troy Carter (D-LA):

I've got about 55 seconds. Do you mind sharing the mic with Mr. Stamos and Ms. Moore?

Alex Stamos:

Yes. I mean, I teach two classes at Stanford, and this is a huge discussion among the faculty: how do you assign an essay in the modern world? I think one of the great things about AI is it is going to even out the playing field, in that for people who lack business email skills, perfect English and such, AI will be a huge help, but you don't want to provide kids that crutch as they get there. And this is going to become much harder over the next couple of years because AI is being turned on by default, so students won't have to actively cheat; they'll get in trouble for not going in and turning off things that are turned on by default in Google Docs or in Microsoft Word and such. And so I think it's a huge problem for both higher and lower education.

Rep. Troy Carter (D-LA):

Ms. Moore.

Debbie Taylor Moore:

I would just say that the space is evolving and that there are many tools that are out there to recognize this in papers and research work and that sort of thing. But you have to remember that generative AI looks at and scans all of the work that's out there, and a lot of people have a lot of work out there. And so being able to defend against that, and also being able to make sure that there is critical thinking happening in universities, and critical thinking happening still for our students even though they have this magnificent tool. I recently had a friend whose daughter had to appeal to the university because she was accused of having used a generative large language model. And in reality, she was very, very prolific on the internet, and it was coming up as her own work. So we have a ways to go with these technologies.

Rep. Andrew Garbarino (R-NY):

Thank you. My time has expired. Thank you. Gentleman yields back. I now recognize Ms. Lee from Florida for a second round of questions.

Rep. Laurel Lee (R-FL):

Thank you, Mr. Chairman. Mr. Swanson, I'd like to return to something you said a little while back, which was a comment that 80% of open source software is at risk. I know you touched on this as well in your written testimony and specifically encouraged this committee and Congress to support some measures, including bug bounty programs in foundational artificial intelligence models that are being integrated into Department of Defense missions and operations. Would you share with us a little bit more about how bug bounty programs specifically could help in that type of program, and any other particular things you think Congress should be looking at or considering in helping protect the infrastructure and critical systems as it relates to AI?

Ian Swanson:

Thank you for the question. Appreciate it. My statement was 80% of the components, the ingredients used to make AI, come from open source. As such, protecting open source is really important. So what is a bug bounty program? A bug bounty program basically gathers a threat research community and focuses them to find vulnerabilities, and in this case, find vulnerabilities in machine learning open source software. I'll give an example. Through this work, through a bug bounty program, we were able to find a critical vulnerability in what's called a model registry. What is a model registry? It's where we host the machine learning models that power AI. What was the exploit? A malicious actor could get access to the model registry to modify the code, steal the model, or perhaps traverse it to get into other sensitive areas of critical infrastructure. NIST and MITRE gave this a critical vulnerability score.

Now, a lot of research hasn't been done in open source software as it relates to machine learning and bug bounty programs. It's one sector, if you look at all of the big security institutions, where they don't focus, but it's the massive number of components that's used in AI machine learning. So what I was asking Congress is to focus and to say, hey, let's protect the ingredients. As I discussed with Mr. Gimenez, it's not AI attacking AI in the models. It's going to be attacking the supply chains of how these things are built. And bug bounties will help find vulnerabilities and remediations to fix those ingredients.

Rep. Laurel Lee (R-FL):

Thank you. Mr. Stamos, I'd like to have you elaborate a bit. Earlier we were talking about the notion of Congress outlawing ransomware payments, and you indicated you anticipated that if Congress were to do so, it would be followed by six months of carnage. Would you tell us a little bit more about what you anticipate that six months of carnage would look like, and what we can be doing to help mitigate that scenario?

Alex Stamos:

Yeah, so I mean, perhaps I'm being a little too colorful here, but I do think these are professionals. They're used to making tens of millions of dollars a year. They are somewhat efficient actors, and so eventually I think that they will have to adapt their economic model, but in the short run, being cut off means they would do everything they can to try to convince the United States that this policy was not appropriate. And so I think the things you can do: one, no exceptions. I've heard people talk about payment bans and then you say, okay, now except hospitals or anything like that. If you have exceptions, then that's all they'll do, right? If there's an exception for hospitals, all they're going to hack is hospitals. And so it's terrible, but we'd have to live through the president getting up here and saying, we're not negotiating with terrorists. We are not paying the bounty. It's terrible for the people who live in this place. We're going to give them as much support as possible.

Second, I do think that there is a role to play, especially, like I said, the locals and states are in real trouble here. And so preemptive grants for them to upgrade their architectures. What we usually see is these bad guys, or, unfortunately, these ransomware actors, are honestly good at breaking the networks that are built the way Microsoft told you to build them in 2016, right? That's kind of a very traditional, not to get too technical, but Active Directory, SCCM, like a very traditional Windows network that the bad guys love. And that's what your local states, your districts and such have. And so an aggressive push to try to get them onto more modern technology stacks is something you would do in that runup.

And then I think the third is, like Mr. Swalwell was talking about, trying to impose costs on the bad guys, so that in the same time frame in which you are trying to deter them by the government standing strong, you're also actively going after them. You're doxxing them. You have the FBI indicting them. You have Cyber Command, you destroy their command and control networks and such. And eventually they would have to change their business models to adapt. That wouldn't make the world–that wouldn't all of a sudden make America totally secure, but it would get rid of this cycle of these guys being able to get better and better, both by practicing their craft all day and also collecting all this money and building these huge networks.

Rep. Laurel Lee (R-FL):

Thank you, Mr. Stamos. Mr. Chairman, I yield back on time.

Rep. Andrew Garbarino (R-NY):

Yields back on time. One second. We'll give that to Mr. Menendez. Mr. Menendez, I now recognize you for five minutes for the second round of questions.

Rep. Rob Menendez (D-NJ):

Thank you, Mr. Chairman. I just want to return back to the risks that AI poses to election security. I appreciate Mr. Stamos' answer. I just want to quickly open it up to any of the other witnesses if they'd like to expand on it. Okay. So let me ask a question. How can CISA best support election officials in combating this risk? Mr. Stamos, I'll go back to you.

Alex Stamos:

So not just about AI, but the kind of confederations that came together to protect the 2018, 2020, 2022 elections have fallen apart, and I think Congress has a role to play here. This is due to investigations elsewhere in the House, and to civil lawsuits. There's a lot of argument over what is the appropriate role of the state here. And there are totally legitimate arguments here, right? There are absolutely legitimate arguments that there are things that the government should not do, especially when we talk about mis- and disinformation. Instead of this being a five-year fight in the courts, I think Congress needs to act and say, these are the things the government is not allowed to say. This is something the administration cannot do with social media companies, but if the FBI knows that this IP address is being used by the Iranians to create fake accounts, they can contact the platforms. And recreating that pipeline from Cyber Command and the NSA to the FBI, who can help social media companies stop foreign interference, recreating this I think is a super critical item, and only Congress has the ability to do so.

Rep. Rob Menendez (D-NJ):

Got it. And just looking through the threat landscape, how can we support our local election officials, who face some of the same challenges small businesses do in terms of fewer resources, not having the same talent arrive at their doorstep?

Alex Stamos:

Yeah, I mean, so traditionally this has been the role of the Election Infrastructure ISAC and the Multi-State ISAC, in that, unlike every other developed economy, we have 10,000 election officials who run our elections. And that does provide security benefits, in that it would be extremely hard to steal the entire election because you have so many disparate systems, so many different ways of counting and such, but it also makes it much easier to cause chaos. So I think reaffirming CISA's role as a supporter here, and reaffirming the role of the ISACs in providing that level of support, is a key thing. Again, something that's kind of fallen apart since 2020.

Rep. Rob Menendez (D-NJ):

Are there any other considerations that we within Congress should be thinking about as we go toward 2024 with respect to election integrity?

Alex Stamos:

I mean, I guess there's a bunch. I think the other thing a number of people have proposed, a colleague of mine, Matt Masterson, came and wrote a report with us at Stanford on the things you could do, and so I'm happy to send you a link to that. But there's been discussion of creating standards around what audits look like, what does transparency look like and such. I think it would be nice to see the states, with a push from the federal government, more aggressively mentally red team their processes to see how it can look to people when you have rules that mean counting takes you two weeks. Here in California, it takes us forever to count our ballots because of a bunch of different rules, and that makes people think the election's being stolen. It's not being stolen, but it's not fair. You should set your policies with the expectation that people will take advantage of those kinds of situations to say the election's being stolen. And so I think doing a better job of setting up our rules to be very transparent, to make it clear to people, this is how an audit works, is one of the things that we've got to think about going into 2024, so that when you have these things that seem a little weird, it does not create an opportunity for bad actors to try to imply that the entire election was rigged.

Rep. Andrew Garbarino (R-NY):

Appreciate it. Thank you so much. I yield back; gentleman yields back. I now recognize myself for the last five minutes of questions. I'll start with Mr. Swanson. The EO directs DHS to establish an artificial intelligence safety and security board. How can the secretary best scope the composition and mission of the board, and what kind of perspectives do you think DHS should ensure are represented?

Ian Swanson:

Yeah, thank you for the question. I think for the composition of the board, it needs to be a committee that truly understands that artificial intelligence is different from your typical software. That's first and foremost. The second piece is the conduct of that board: we need to take an inventory. We need to understand where all of our machine learning models are, the lineage, the provenance, how they're built, and only then do we have the visibility, the auditability, to actually secure these.

Rep. Andrew Garbarino (R-NY):

Mr. Stamos, quick question for you. I lied. I'm not going to be the last person to say that CISA's information sharing mission is key. Do you believe CISA has the tools it needs to be able to notify entities of potential AI threats? Is CISA's ability to issue administrative subpoenas sufficient?

Alex Stamos:

The administrative subpoena thing, my understanding is it's mostly used for if you find vulnerabilities and you can't assign them to a specific–here's an open port and we think it's a dam, but we're not sure exactly who it is, so you can find out who that might be. What I would like to see is, I think it would be great to see, on top of what Congress did on centralizing cyber incident reporting, some equivalent around AI incidents that is effectively blame free, regulation free. I'd like to see something more like what happens with airlines, where if there's a near miss, you can report that to a system that NASA runs and nobody's going to sue you. Nobody's going to take your license away. That information is used to inform the flight safety system. I'd love to see the equivalent thing for AI. I don't think CISA has that capability right now.

Rep. Andrew Garbarino (R-NY):

So subpoenas are useful, but something like CIRCIA would be, like what we did with incident reporting.

Alex Stamos:

I just feel like subpoenas are for a very specific thing. The key thing we need is we need defenders to work together, and right now the lawyers don't let them. And so figuring out what those barriers are that make the attorneys give that advice, and taking those barriers down, I think is a good idea.

Rep. Andrew Garbarino (R-NY):

Thank you. Mr. O'Neill, I'm concerned about the use of AI further exacerbating the risks associated with the interdependencies across critical infrastructure sectors. Does your sector understand the risks associated with these interdependencies and AI? What are you doing to mitigate that risk, and is there more that CISA can do to help?

Timothy O’Neill:

Thank you for the question. So for Hitachi, we work in multiple sectors. We own a company focused on energy and a company focused on rail. The subsidiary I am in is focused on critical infrastructure like data storage and such, helping companies be more resilient and so forth. What we're doing as a company is we're getting the people from all of the sectors together, and along with our cybersecurity experts, we're going through the use cases ourselves, in the absence of regulations, to do threat modeling and so forth, and to look at the use cases so that we can help these critical sectors be more effective in protecting what they do. And this was said earlier, in regards to the mass event where the technology's unavailable and the critical sectors thus are unable to function: the thing that I think CISA could do, again, is helping bring more business intelligence to looking at the problem of how to recover, what the mission is, and being able to deliver the mission of the critical infrastructure even in the absence of the technology being available. When I worked at a health insurance company, one of the things we did was we approved people to get medical procedures in an emergency. So we went through tabletop training that said, if this company fails, we're going to fail open and we're going to approve all of the requests that come in and we'll sort it out later, so that no one could be denied care. That would be an example. Thank you.

Rep. Andrew Garbarino (R-NY):

Thank you, Mr. O'Neill. And finally, Mr. Swanson, how do you expect malicious actors will leverage AI to carry out cyber attacks? And do you think the efforts to use AI for cyber defense will progress faster than efforts to use AI for offensive cyber operations?

Ian Swanson:

Yeah, that's a great question. I always think it's going to be a give and take here. It's going to be hard to say one stays in front of the other. What I will say is, as long as we understand the basics of how these things are constructed and protect that foundation, then we're going to be less at risk from these attacks. That's where the focus needs to be.

Rep. Andrew Garbarino (R-NY):

Thank you very much. My time is up. I now recognize Ms. Jackson Lee from Texas for five minutes of questions.

Rep. Sheila Jackson Lee (D-TX):

I thank you for yielding, and let me thank the ranking member for a very important hearing. I'm probably going to take a lot of time reading the transcript, having been delayed in my district, but I wanted to get in the room, first of all to express my appreciation that this hearing is being held, because I've been in dialogue in my district where I've heard the community's commentary that Congress has no interest in regulating or understanding AI. I want to go on record saying that we as members of Congress have been engaged in task forces. I'm a member of the task force led by a bipartisan group of members. I know that the ranking member and others, we have had discussions on the criticality of AI and how we play a role. It is not always good for Congress to say, me, me, me, I'm here to regulate, and not ensure that we have the right highway to go forward.

So, Mr. Stamos, if I ask you questions that have been asked and answered, forgive me, I'd like to hear them again. And in particular, let me start off by saying, you referenced in the last page of your testimony that it is vital for policymakers to adopt nimble policies. That is something I am very wedded to. I don't know if I'm right, but I'm very wedded to it because AI is fluid. It is something today, it was something yesterday, and it'll be something tomorrow and then the day after. But nimble policies and security in collaboration with the private sector, how would you recommend we implement that? And in that, would you please use the word should: should Congress, is there a space, a place for Congress to jump in and regulate? Again, this is a fluid technology that is moving faster than light, I would imagine, but let me yield to you, please.

Alex Stamos:

Yes, Congresswoman. I mean, I think you made a very good point about being flexible here. My suggestion on AI regulation is to do it as close to the people it's impacting as possible. So the people you can learn from on what not to do here would be Europe. The European Parliament believes that effectively every problem can be solved by regulating the right big U.S. company. And the truth is with AI, while it feels like five or six companies are dominating it, the truth is that the capabilities are actually far more spread out than you might tell from the press, because of open source, like Mr. Swanson's been talking about, and just because of the fact that my Stanford students build generative AI models in senior departmental school projects. Now that is just something they do in the spring to get a grade. And so what I would be thinking about is...

Rep. Sheila Jackson Lee (D-TX):

These are students who have not yet become experts.

Alex Stamos:

Right. But I'm saying they go out into the workforce, and they don't necessarily go work for an OpenAI or Microsoft or Google. They can go work for an insurance company, and the way that they will be building software for State Farm in the future is going to be based upon the basic skills that they've learned here, which includes a huge amount of AI. And so my suggestion is to regulate the industries that have effects on people, based on the effects. The fact that it's AI or not, if an insurance company makes a discriminatory decision about somebody, it is the discriminatory decision that should be penalized, not the fact that there's some model buried in it. And I think it's not going to be effective to try to go upstream to the fundamental models and foresee every possible use. But if it's misused for medical decisions, if a car kills somebody, if a plane crashes, we already have regulatory structures to focus on the actual effect on humans, not on the fact that AI was involved.

Rep. Sheila Jackson Lee (D-TX):

Then how would you approach AI? What would be Congress' reach to AI, where Congress could say, on behalf of the American people, we have our hands around this?

Alex Stamos:

What you could do in those cases is, I think one of the things that's very confusing to people is where does liability accrue when something bad happens? Is it only at the end, or is there some liability upstream? So I think clarifying that is important. And I do think, like the EO said, having your hands around some of the really high-end models to make sure that they're still being developed in the United States, that there's appropriate protections around that intellectual property being protected. I think that's important, but there's just, there's not a magical regulation you can pass at the apex of the AI tree that's going to affect all of the possible bad things that happen at the bottom.

Rep. Sheila Jackson Lee (D-TX):

Ms. Moore, let me quickly get your view on deepfakes, or the use of AI in gross misrepresentation, being someone else fraudulently, such and so dangerously that it impacts individual lives, but also national security.

Debbie Taylor Moore:

I think that as Congress looks at AI in general, and the fact of the matter being that AI has been in place for a very long time already, I think that the AI Bill of Rights sort of outlines some of those categories where we've not really given due care to individuals in terms of their ability to move within the world without all of these decision-making systems making judgments about them. I think that fairness and trustworthiness are critically important, and that if industry is to regulate itself, it really needs to explain how its models make decisions. I believe that the ability to prove that your AI and your model is not predatory is an important part of trustworthy AI. And I think you have to start, as Alex said, with the individuals most impacted, and there are a number of use cases. There've been tons of groups convened for the purpose of collecting this sort of data, and it shows up in the AI Bill of Rights. I think it's a good starting point to think about disparate impacts, but it is not that the algorithm needs to be regulated. It's the use cases.

Rep. Sheila Jackson Lee (D-TX):

So there you have an answer: it's not the top, it's down at the final impact. Let me say, I have many more questions, but let me thank you for this hearing, and thank both the chairman and the ranking member. With that, I yield back, and I'll dig in and read even more. Thank you.

Rep. Andrew Garbarino (R-NY):

Well, I want to thank Mr. O'Neill. We don't know what that buzzing means either. I want to thank you all for the valuable testimony, and I want to thank the members for their great questions. This has been the longest hearing that we've had this year, and it's because of the expertise on the panel of witnesses. So thank you all for being here. And before we end, I just want to take a point of personal privilege to thank Karen Mumford on my team here. This is her last hearing as a part of the committee. I don't think it's greener pastures, but she's moving on to a much nicer position, and we will miss her dearly. This subcommittee would not have been as successful this year without her, and I would not look like I know what I'm doing without her. So if we could all give a round of applause. Alright, so the members of the subcommittee may have some additional questions for the witnesses, and we would ask the witnesses to respond to those in writing. Pursuant to Rule 7D, the hearing record will be held open for 10 days without objection. The subcommittee stands adjourned.
