- Tech Insights
Ransomware. Security threats. Hacked accounts. We’ve heard them all, and we ask ourselves: will it ever stop? Am I protected enough? What am I doing to reduce my risk? Thankfully, McAfee and its Chief Data Scientist/Senior Principal Engineer, Dr. Celeste Fralick, have some answers.
Dr. Fralick has been with Intel, Intel Security, and McAfee for over 26 years now. While she has experienced the security industry’s extraordinarily fast pace firsthand, it is the prudent use of Artificial Intelligence (“AI”) that keeps McAfee a leader in global online protection for Consumers. McAfee’s purpose is to create solutions that free and empower everyone to confidently enjoy life online. They deliver an all-in-one security, identity protection, and privacy service that helps keep people safe across activities, devices, and locations through a personalized, intelligent, and inclusive approach. But with over 600M connected devices, AI is a welcome technology for preventing 375 new threats every minute and 11 ransomware hits every second. The bad actors are increasing their attacks, and AI helps manage, at sub-second speed, the “big data” that comes with the cybersecurity territory.
“AI has to be managed carefully and proactively,” Dr. Fralick says. “Data quality, data governance, and data operations are just the tip of the ‘AI iceberg’ that can fell an AI model in the field.” And she knows of what she speaks, with a whopping 41 years in data science, statistics, and engineering. She has authored numerous papers, holds several patents spanning 10 different markets, and was named one of Forbes’ “Top 50 Women in Technology” for the US. In addition to data management, which company leadership must actively support, she also believes that in-line and field operational monitors can protect not only the Consumer but the company as well. “When you monitor bias, explainability, and other data characteristics, it increases trust in the security solution as well as in the AI that supports the product.”
Bias is intentional or unintentional favoritism toward one or more things. It can be found in data, whether the bias is measurement, sampling, algorithmic, or societal. For developers of AI solutions, measuring and reducing bias requires ongoing vigilance. While elegant statistical tests exist, bias can also be routinely monitored simply by analyzing volumes, distributions, skewness, and labels. Complex trade-offs exist with bias, so it is imperative that AI developers understand its implications for overall AI trustworthiness and for the Consumer.
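The routine monitoring the article describes can be sketched in a few lines. This is a hypothetical illustration, not McAfee’s actual tooling: the slice names, thresholds, and alert wording are all invented assumptions, but the checks mirror the text: volume, distribution skewness, and label balance.

```python
# Hypothetical bias monitor: per data slice, check volume, skewness of the
# score distribution, and label balance. All thresholds are illustrative
# assumptions, not anything McAfee has published.
from statistics import mean, stdev

def skewness(values):
    """Sample skewness (Fisher-Pearson, uncorrected); needs >= 2 points."""
    m, s = mean(values), stdev(values)
    return sum(((x - m) / s) ** 3 for x in values) / len(values)

def check_slice(name, scores, labels, min_volume=100, max_skew=1.0):
    """Return alert strings for a data slice that looks biased."""
    alerts = []
    if len(scores) < min_volume:
        alerts.append(f"{name}: low volume ({len(scores)})")
    if abs(skewness(scores)) > max_skew:
        alerts.append(f"{name}: skewed score distribution")
    positive_rate = sum(labels) / len(labels)
    if not 0.05 <= positive_rate <= 0.95:
        alerts.append(f"{name}: label imbalance ({positive_rate:.2%} positive)")
    return alerts
```

In practice a monitor like this would run continuously over field telemetry, which is what makes the vigilance “ongoing” rather than a one-time audit.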
"I love data so much that I bring statistical charts to my doctor appointments – they’re not too happy!"
Explainability is also an integral part of developing AI trustworthiness. With Deep Learning (neural networks with many layers), it can be difficult to explain how a model arrived at its results. This can cause issues, particularly with privacy laws that require models to justify how they reached their conclusions. Enter Explainability tools, which describe the direction and magnitude of a particular feature’s contribution to a model’s output. This, in itself, can cause challenges, Dr. Fralick notes: Personally Identifiable Information (“PII”) and Intellectual Property (“IP”) can be exposed unintentionally. While Explainability is necessary, innovative solutions must always be developed to ensure that all parties are protected and the AI model is trustworthy.
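For a linear model, “direction and magnitude of a contribution” has a very literal form: each feature’s contribution is its weight times its value, the sign gives the direction, and the absolute value gives the magnitude. The sketch below is a toy under that assumption; the feature names and weights are invented and have nothing to do with any real McAfee model.

```python
# Toy feature-attribution report for a linear model. Each contribution's
# sign is its direction (toward "malicious" or "benign" in this invented
# setup) and its absolute value is its magnitude.
def explain(weights, feature_values, feature_names):
    contributions = {
        name: w * x
        for name, w, x in zip(feature_names, weights, feature_values)
    }
    # Rank by magnitude, largest contribution first.
    return sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

# Invented example: three hypothetical file features.
report = explain([2.0, -1.0, 0.5], [1.0, 3.0, 4.0],
                 ["entropy", "age", "size"])
```

Deep models need approximation techniques (surrogate models, perturbation-based attribution) to produce a report of this shape, which is where the tools the article mentions come in, and also where the PII/IP leakage risk arises: the report itself reveals what the model keys on.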
Trust is critical for AI. Significant industry effort in standards, guidelines, and frameworks (e.g., ISO, IEEE, and NIST) has provided the tools necessary for establishing AI trustworthiness, and these bodies identify focus areas for monitoring AI. Adding an AI Ethicist along with an AI Ethics board is also recommended. Dr. Fralick envisions even more from AI trust and monitoring: the reliability of AI. “AI Reliability, measured in Mean Time To Decay (MTTD), is a recognition that all models will decay for some reason. And those ‘reasons’ - whether bias, Explainability, or anything else that may cause model drift over time - are critical proactive monitors, so you can eventually predict MTTD.” She believes that MTTD can be used in future industry comparisons; as Consumers become more AI-savvy, the use of this critical metric will become a necessity.
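The article does not publish Dr. Fralick’s actual MTTD formulas, so the following is purely a hypothetical illustration of the idea: if a model’s field accuracy decays roughly exponentially, the observed decay rate lets you project when it will cross a retraining threshold. The exponential assumption and all numbers are invented for the sketch.

```python
# Hypothetical MTTD-style projection, assuming exponential decay of field
# accuracy: acc(t) = acc_start * exp(-k * t). This is an invented model
# of the concept, not Dr. Fralick's formula.
import math

def estimate_mttd(acc_start, acc_now, weeks_elapsed, threshold):
    """Weeks from deployment until accuracy decays below threshold."""
    # Decay rate observed between deployment and now.
    k = math.log(acc_start / acc_now) / weeks_elapsed
    # Time for accuracy to fall from acc_start to the threshold.
    return math.log(acc_start / threshold) / k
```

For example, a model that slipped from 95% to 90% accuracy over four weeks would, under this toy assumption, cross an 85% retraining threshold a little over eight weeks after deployment. Real decay is messier, which is why the monitors for bias, drift, and the rest feed the prediction.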
The mathematics of MTTD aren’t easy, Dr. Fralick says. But with her reliability-modeling background in the semiconductor industry, she believes it can be done and will ultimately be good for the Consumer. And while she enjoys strategizing 3-5 years into the future, she relishes “number crunching” for the detailed challenge of the day, including MTTD formulas. But she emphasizes that personal responsibility for good security hygiene is imperative to augment the managed AI embedded in any security product.
Updating your phone and computer applications and changing your passwords frequently are the basics. But Consumers can go even further in protecting themselves. Dr. Fralick recommends applying age-appropriate security controls for families, as well as protecting home routers. Using a VPN service to connect to free Wi-Fi at the airport or local coffee shop can minimize “attack surfaces” that bad actors can actively exploit. Reducing vulnerabilities with updated security solutions is akin to locking front doors and backyard fences to deter would-be criminals.
Proactive security hygiene by Consumers, combined with state-of-the-art AI in McAfee security products, can be a fatal blow to bad actors, Dr. Fralick says. Given her record as a seasoned professional and a trailblazer for technology, data science, and women in STEM, we believe her and her 3-5 year outlook for AI Reliability. We can’t wait to see what she innovates next!
Celeste Fralick, Ph.D.
Chief Data Scientist of McAfee
McAfee is a global organization with a 30-year history and a brand known the world over for innovation, collaboration and trust. McAfee’s historical accomplishments are founded upon decades of threat and vulnerability research, product innovation, practical application and a brand which individuals, organizations and governments have come to trust.