Exploring Information Security

Securing the Future - A Journey into Cybersecurity Exploration

Image created by ChatGPT

How AI Has Impacted Our Lives in the Last Year

May 21, 2024

In the past year, the field of AI has seen significant advancements and a greater focus on regulatory and ethical considerations. At ColaSec we will be talking about AI for our May meetup. This will be a group discussion like we had last year. We wanted to revisit AI and how our views and uses of it have changed. We have a virtual option available for those who can’t make it in person.

To prepare for the discussion I wanted to put this blog post together reviewing AI and how it’s impacted our society over the last year. Ironically, this post was created with help from ChatGPT.

ChatGPT weighs in

  • Generative AI's Expansion: Generative AI technologies have experienced widespread adoption and development. Companies increasingly use these capabilities not just for enhancing existing products and services but also for innovating new business models​ (McKinsey & Company)​.

  • Ethics and Regulation: There has been a growing global movement towards regulating AI more stringently. For example, the European Union has been active in proposing comprehensive AI legislation, and the White House has introduced an executive order aimed at setting standards for AI development and deployment​ (MIT Technology Review)​ (Goodwin)​.

  • Technological Innovations: On the technical front, major advancements have been made in areas such as AI explainability, privacy, and safety. New tools have been developed to improve the transparency and accountability of AI models, such as watermarking technologies for identifying AI-generated content​ (Google DeepMind)​.

  • Application in Science and Healthcare: AI's application in scientific research and healthcare continues to grow, with new models enhancing our ability to design novel proteins and materials, potentially revolutionizing fields like drug discovery and environmental sustainability​ (Microsoft Cloud)​.

  • Public Discourse and Awareness: The public discourse around AI has matured, with more widespread discussions on the ethical implications, societal impacts, and long-term consequences of AI technologies. This includes concerns about AI-generated disinformation and the privacy implications of widespread surveillance technologies​ (CSIS)​​ (Digital Watch Observatory)​.

  • Generative AI and Cybersecurity: AI technologies are playing an increasingly critical role in cybersecurity, automating tasks and enhancing threat detection and response capabilities. The dynamic nature of AI is likened to an arms race among companies, governments, and cybercriminals, emphasizing the need for vigilant cybersecurity practices​ (TechNewsWorld)​.

  • AI in Political Processes: The use of AI in political processes, such as elections, has become a significant concern. Issues like the generation and spread of political deepfakes are becoming more prevalent, complicating the political landscape and potentially influencing election outcomes. This underscores the urgent need for robust methods to detect and mitigate such AI-generated disinformation (MIT Technology Review) (AI Index) (Check Point).

  • Advancements in AI Applications: AI's capabilities continue to expand into more complex and nuanced areas, including multimodal interactions where AI systems can understand and generate outputs across text, image, video, and audio. This broadens AI's application in various fields but also raises concerns about privacy and the creation of synthetic content that can be difficult to distinguish from human-generated content​ (Tech Xplore)​.

  • Ethical Concerns and AI Bias: The integration of AI in sectors like finance, healthcare, and public services brings benefits such as increased efficiency and new capabilities. However, it also brings risks such as biases in AI algorithms, which can perpetuate existing inequalities. There is a growing emphasis on developing responsible AI practices to address these issues​ (AI Index)​.

These developments reflect a dynamic field that is not only advancing rapidly in terms of technology but also becoming increasingly intertwined with broader societal and regulatory frameworks.

Tim weighs in

  • Verizon DBIR: I recently read the latest Verizon DBIR and made a blog post about it. One of the takeaways was that AI hasn’t had a significant impact on attacks. It is helping improve the efficiency of attacks, but it’s not an action in itself yet. This may change or it may not. Attackers use the path of least resistance, and setting up a scheme to attack people with deepfakes and voice impersonations is a bit more elaborate. That’s not to say these attacks aren’t out there; they’re just not as widespread.

  • Policy creation: One of the first things I did was create security policies for a small business. It took me just a few hours to create 10 security policies that the company was required to have. They were concise and easy to read. I hope security teams are paying attention, as this will improve the quality of policies overall and make them much more consumable and easier to understand.

  • Building out ExploreSec.com: I’ve used AI to build out a large portion of this site. I’ve gotten a lot more done than I ever would have on my own. I can put up deep dives in less than an hour. I will go back and edit the initial output from ChatGPT. I’ve written a few blog posts with ChatGPT with varying results. I believe my better posts are going to be me and my stories and experiences. I did have one blog post get deleted accidentally after I wrote it. Instead of doing a full rewrite, I had ChatGPT write the article, and I thought it came out very well. It’s been very useful for the podcast. I now use ChatGPT almost entirely to write my show notes. When I record, I also transcribe the conversation. I then take that transcript and have AI build show notes. It’s been an enhancement for show notes and streamlines my post-editing process.

  • Creating Security Awareness Content: My new role is building out a security awareness program for a large healthcare organization. I’ve used ChatGPT to build out blog posts and create newsletter items. Smishing is my most recent blog post. As with building out content on the site, I have it create the first draft and then make adjustments from there. This allows me to easily create regular content for our internal communication site while also educating people on different security topics. I’ve also started releasing a monthly newsletter for phishing threat intelligence and security awareness. I take articles I find online and have either ChatGPT or Gemini write a short newsletter item. With Gemini and Copilot I could just feed in the link instead of having to scrape the data. I found Copilot to have the best repeatable format. Eventually the free trial ran out and it wanted me to log in. It also got very uncomfortable when I was doing phishing research and forced me off the topic. OpenAI recently released GPT-4o, and ChatGPT now takes links and creates content from them.

  • Scripting: I’ve found AI extremely useful for building out PowerShell scripts. One of the things I like to do in a new role is build out the metrics. This often means custom metrics that a platform doesn’t have reporting on. I’ve taken the raw data and created PowerShell scripts that massage the data into the metrics I want. The generated PowerShell usually works the first time. If it doesn’t, I simply feed the AI the error. These usually start out as simple scripts and quickly get more complicated as I think of more use cases. I will be posting these scripts on my GitHub at some point.

  • Research: I’ve been using AI to help do research on topics. I still find that Google is better for some things. AI is still several months behind on what it can provide, but it’s getting better. Like creating content, it’s a starting point for research. On some of the security topics I’ve explored, it provides resources I’ve never heard of before, but it can also be susceptible to marketing content. I would expect this to get worse as marketing teams figure out how to get their content into AI as a top result, similar to how they figured out Google and other search platforms.

  • Image Generation: I’ve been extremely happy with the images generated by ChatGPT. I use it for blog posts where I can’t find images. Usually I feed it the content and ask it to make an accompanying image. I’ve also used it for my presentations when I can’t find a meme or visual that highlights the content. It’s not always great. It still struggles with words, but I’ve seen it get better. The same prompt will give different results. Sometimes there’s one thing I don’t like, and when I ask it to remove that one thing, it’ll create a whole new image. I’ve messed around with Photoshop for a couple of images, but it usually ends up being more hassle than it’s worth. I just keep giving it prompts until I get something I want. Sometimes starting over and taking a different approach with the prompt is the best option.

  • Social Media: I’ve played around with AI for use on LinkedIn. Some of the posts it creates are cheesy. I primarily use it for podcast announcements. I need to play around with it more, but I’ve started to move away from it. I have found that the point of view given in the prompt matters a lot. It can get caught up writing copy for a marketing team instead of for someone with an idea or someone who wants to comment on a blog post. This makes sense, as I imagine marketing teams are using it to create social media posts on a regular basis.

  • Presentations: This year I used AI to help build my abstract, bio, and outline for my presentation. I haven’t had it build my slide deck yet, but I’m toying around with it. The abstract and bio alone are huge for me as I’m not a great self-promoter. I was able to build out all three in 30 minutes. This used to take me several hours to put together. I also believe I’ve been accepted to speak more because of it.
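The metrics workflow from the Scripting bullet above — export raw data, have AI draft a script, and feed errors back until it runs — tends to produce small data-massaging scripts. The post mentions PowerShell; here is a minimal Python sketch of the same idea, with hypothetical column names (`clicked`, `reported`) standing in for whatever a phishing platform actually exports.

```python
import csv
import io

def phishing_metrics(csv_text):
    """Compute click and report rates from a raw simulated-phishing export."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    total = len(rows)
    clicked = sum(r["clicked"] == "yes" for r in rows)
    reported = sum(r["reported"] == "yes" for r in rows)
    return {
        "click_rate": round(100 * clicked / total, 1),
        "report_rate": round(100 * reported / total, 1),
    }

# Hypothetical export: real column names vary by platform.
raw = """user,clicked,reported
alice,no,yes
bob,yes,yes
carol,yes,no
dave,no,no
"""

print(phishing_metrics(raw))  # {'click_rate': 50.0, 'report_rate': 50.0}
```

The appeal of scripts like this is that the raw export never changes much, so the same script can be rerun every reporting cycle.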

I’ve found AI to be a valuable tool for content and scripting. It’s helped me build content for ExploreSec.com. It’s helped me improve my presentations, both from a submission and a content standpoint. I’m excited to get back into scripting to see what sorts of automation I can build for regular tasks like metrics. Looking ahead, I’m continuing to come up with use cases. My next project is to understand how to use voice AI from an attacker’s standpoint but also from a podcaster’s standpoint. There are some use cases that I think will enhance the podcast.

What are your thoughts on AI and how have you used it over the past year?

In Experiences Tags AI

Exploring the Verizon DBIR - Image created by ChatGPT

2024 Verizon DBIR Insights and Thoughts

May 13, 2024

The Verizon Data Breach Investigations Report (DBIR) for 2024 was recently released. It’s a must-read for those in cybersecurity. It gives great insight into the overall threat landscape and then breaks it down by industry. Working in healthcare, this is important because while ransomware grabs the news, a bigger concern may actually be insider threat. This is highlighted even more this year: new requirements around reporting on security incidents and breaches have surfaced more insider threat, specifically in the Miscellaneous Errors category. My random thoughts from the report are below, with a lean towards healthcare.

Insights and thoughts on the Verizon DBIR

Vulnerability exploitation on the rise

Exploitation of vulnerabilities tripled from last year. I’ve read similar numbers from other trend reports and it makes sense. As organizations get more controls in place such as Multi-Factor Authentication (MFA) and people get better at identifying phishing (later in the report) attackers will pivot to other ways of getting in. We’ve already seen a rash of vulnerabilities in network appliances over the last several months that could allow attackers into the network.

Human Element Calculation Change

Privilege misuse was removed from the human element calculation, which means the human element metric dropped to 68%; it would have been 76% had privilege misuse been kept in this year. I’m a little torn, because I still believe it’s a human element misusing privilege. The idea is to better align the metric with their security awareness recommendations. From that angle I get it, because privilege misuse is intentional regardless of security awareness training.

Added third-party vendor and supply chain issues

This is a good one to add. As organizations get better at defending, attackers will look to get in via third-party vendor or supply chain issues. Which really isn’t a new concept; see the Target breach or the Trojan War. A good third-party vendor risk management program is essential to keeping organizational data secure.

Errors Increases due to mandatory breach notifications

Errors increased to 28% this year. Internal actors increased from 20% to 35%. Organizations that don’t have to report won’t. In healthcare, if a breach is under 500 records then reporting doesn’t have to occur, so there are even more Errors not being reported. I expect more regulation will make this number continue to grow for healthcare. This will hopefully highlight and shift focus to finding solutions to the insider threat problem. Yes, there’s Data Loss Prevention (DLP), but it’s a pain in the ass to get in place.

Meme created by ME!

Security Awareness is Improving

20% of people are reporting simulated phishing emails and 11% are reporting after clicking. That’s a positive improvement. I also really like that the report focused on report rates and not click rates. Click rates can fluctuate depending on the difficulty of the phish and the time of year. Too much focus is put on clicking when what’s really needed is an improvement in reporting.

Reporting gives the security team an opportunity to respond to an incident sooner. I always tell people that clicking doesn’t bother me. Did they report it? It’s much easier to respond now, than several weeks later when there’s a bigger issue. Encouraging reporting, even when a click happens, also helps build a more positive security culture. We’re all human and make mistakes. I’ve fallen for my own phish before.

Generative AI Not as Much of an Issue as We Think

It’s recognized that AI is helping attackers write phishing emails and malware and is being deployed in political campaigns, but it’s not being used in a way that significantly contributes to breaches. This is why I love the Verizon DBIR. Despite the news headlines and play on social media, AI and all the awful things it can do are not currently having a measurable impact. It’s certainly still something that needs to be discussed and understood, with controls put in place, but it may be better to focus on efforts that can make a more substantial impact, such as vulnerability management and security awareness.

Distributed Denial of Service is the top action in incidents

This is where understanding the verbiage of the report is important: incident vs. breach. A breach is a loss of data. An incident is a security event that may not involve data being stolen. Hence, DDoS isn’t about taking the data; it’s about taking the service offline for an extended period of time. This shocked me a little. DDoS is still happening, and it’s impacting a lot of organizations. Having mitigating controls and a plan in place to respond is important for any organization.

Jen Easterly comments on vulnerabilities and the need to shift focus

“...recurring classes of software defects to inspire the development community to improve their tools, technologies, and processes and attack software quality problems at the root.”

Quality code is secure code is something I’ve been preaching for years. If the quality is there then the security will be there. It’s in the documentation. When developers don’t follow best practices and the documentation that’s when vulnerabilities get created. The reason why security folks have a job is because people aren’t developing, coding, or configuring things right in the first place.

I like that Jen is taking a broader view, and it’s not something I’ve thought about. Instead of focusing on individual vulnerabilities or bugs, we should go a level up. Every organization is different, and every development team is going to struggle with different classes of quality issues. We need to be looking at classes of bugs and trying to solve for large groupings of vulnerabilities. This will help the development community identify where they can make improvements in their tools, technologies, and most importantly, processes.

Social Engineering Section

BEC attacks had a median transaction of $50,000. They have a great graph that shows most organizations can get their money back by reaching out to law enforcement. I had a great conversation with Jayson E. Street recently on the Exploring Information Security podcast on social engineering and he had a great idea to send everyone involved in financial transactions a card with a code word on it. If that code word wasn’t authenticated then it’s very likely a BEC attack. I love the simplicity of the solution and I think it can make a good impact.

Web Application Attacks Section

Credential stuffing and brute force attacks are the most common against APIs. Authentication and authorization are the biggest issues for APIs, not so much injection vulnerabilities. That’s an improvement for security, but it also means permissions should be top of mind when developing APIs. Things like MFA and rate limiting also need to be in place to help mitigate the potential of a breach. 1,000 credentials are available online daily for $10. Credentials are cheap and easy to come by.
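Rate limiting of the kind mentioned above is commonly implemented as a token bucket: each client earns tokens at a steady rate and each request spends one, so a credential-stuffing burst gets cut off quickly. A minimal in-memory sketch (single client; a real API would keep one bucket per IP or account, and the parameters here are illustrative):

```python
import time

class TokenBucket:
    """Allow `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        # Refill tokens based on elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)  # 5 req/s, burst of 10
results = [bucket.allow() for _ in range(12)]  # 12 back-to-back requests
print(results.count(True))  # ~10: the burst is capped near bucket capacity
```

Pairing a throttle like this with MFA means a stolen credential list is far harder to spray against an API.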

Free gaming currency lures leading to malicious NPM packages was not something on my radar. This is the younger generation looking to make a fast buck in the gaming landscape. Unfortunately, they’re downloading malware. Typosquatting was second. The report talked about package managers checking external repositories before internal ones. It’s always better to try and build an internal repository system that pulls updates from known-good repositories. This is easier said than done.

Miscellaneous Errors

This is often overlooked by organizations. Insider threat is the bigger concern in industries like healthcare, where people are handling personal, health, and financial data. There’s a lot of data flying around. More than 50% of errors were due to misdelivery, meaning people sent sensitive information to the wrong party, often non-maliciously.

Users accounted for 87% of errors. System administrators went from 46% last year to 11% this year. System administrators largely accounted for internal threat issues due to misconfiguration. They’ve tightened up, but it also highlights how underreported user errors were.

Data Loss Prevention (DLP) is huge to help prevent this. The problem is that DLP is a pain in the ass to implement. I hope that highlighting how big of an issue insider threat is will encourage companies to try and tackle the problem in more creative ways.

Healthcare Industry

I’ve already talked a lot about healthcare above. Miscellaneous Errors regained the top spot after being second to system intrusions last year. I would expect system intrusions to continue to decline in next year’s report due to law enforcement’s increased involvement in taking down ransomware gangs. Privilege misuse was second. This is the more malicious action internal threat actors are taking. System intrusions were third.

Conclusion

The 2024 Verizon Data Breach Investigations Report (DBIR) is a must read. It provides critical insights into the evolving threat landscape, particularly emphasizing the increasing complexity of cybersecurity challenges across various industries. It’s a good anchor point for challenging assumptions about the biggest risk to our own organization.

As cybersecurity environments become increasingly complex, the DBIR’s insights are invaluable for professionals seeking to bolster their defenses and anticipate potential threats. The report serves not only as a tool for understanding but also as a catalyst for implementing robust security measures tailored to specific industry needs. For those in cybersecurity, especially in sectors as sensitive as healthcare, the DBIR is an essential resource that supports ongoing efforts to protect sensitive information and systems from both external and internal threats.

In Technology Tags Verizon DBIR, Healthcare, DLP, AI, security research, Trend Reports

Exploring the security awareness newsletter - Image created by ChatGPT

Security Awareness Newsletter April 2024

May 6, 2024

These are the stories I’ve been tracking that are of interest to people outside of security. Feel free to take this and use it as part of your own security awareness program. The items were created with the help of ChatGPT.

Confirmed: AT&T Data Breach Exposes Millions

A large data leak containing personal information of millions of AT&T customers is being investigated. While AT&T denies the breach originated from their systems, this incident highlights the importance of protecting your personal information.

Here are some steps you can take to stay safe:

  • Be mindful of the information you share online and over the phone.

  • Use strong passwords and change them regularly.

  • Monitor your bank statements and credit reports for suspicious activity.


AI in Elections: Beware the Deepfakes!

AI is shaking up elections! Check Point Research warns of deepfakes and voice cloning being used to mislead voters. They found evidence in 10 out of 36 recent elections. Stay informed - the future of voting might depend on it!


Heads Up, Gamers! Malware Lurks in YouTube Video Game Cracks

Phishing for free games can land you in hot water!

A recent report by Proofpoint discovered threat actors using YouTube to distribute malware disguised as popular video game cracks.

Here's the breakdown:

  • Compromised Accounts: Hackers are targeting both legitimate and newly created YouTube accounts.

  • Deceptive Content: Videos promise free software or game upgrades, but descriptions contain malicious links.

  • Targeting Young Gamers: The campaigns exploit younger audiences' interest in bypassing paid features.


Alert on Privacy Risks in Dating Apps: Spotlight on Hornet

Recent investigations by Check Point Research have exposed critical privacy vulnerabilities in the popular dating app Hornet, affecting its 10+ million users. Despite Hornet's attempts to safeguard user locations by randomizing displayed distances, researchers found ways to determine users' exact locations within 10 meters using trilateration techniques. This finding poses a significant privacy risk, particularly in dating apps that rely on geolocation features to connect users.

Highlights:

  • Hornet's geolocation vulnerabilities could allow attackers to pinpoint users' precise locations.

  • Even after implementing new safety measures, locations could still be determined within 50 meters.

  • Check Point Research advises users to be cautious about app permissions and consider disabling location services to protect their privacy.

The study illustrates the ongoing challenges and potential dangers of balancing app functionality with user privacy, urging both developers and users to remain vigilant.
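The trilateration technique Check Point describes is simple geometry: with three known observer positions and a measured distance to the target from each, subtracting one circle equation from the other two leaves a pair of linear equations that pin down the target exactly. A toy sketch with made-up coordinates (a real attack would work on latitude/longitude and the distances the app reports):

```python
import math

def trilaterate(p1, p2, p3, d1, d2, d3):
    """Locate a point from three reference points and distances to each.

    Subtracting the first circle equation from the other two linearizes
    the system, which is then solved with Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a, b = 2 * (x2 - x1), 2 * (y2 - y1)
    c = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    d, e = 2 * (x3 - x1), 2 * (y3 - y1)
    f = d1**2 - d3**2 - x1**2 + x3**2 - y1**2 + y3**2
    det = a * e - b * d
    return (c * e - b * f) / det, (a * f - c * d) / det

# Three "observer" positions and their measured distances to the victim.
target = (3.0, 4.0)
stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dists = [math.dist(s, target) for s in stations]

print(trilaterate(*stations, *dists))  # ≈ (3.0, 4.0)
```

Randomizing the displayed distance only adds noise; an attacker who queries repeatedly can average it away, which is why the researchers could still narrow locations to tens of meters.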


Ransomware Scams Can Get Creative

Ransomware gangs are constantly looking for new ways to pressure companies into paying up. A recent article on TechCrunch, “Ransomware gang’s new extortion trick? Calling the front desk,” describes a hilarious (but ultimately unsuccessful) attempt by a hacker to extort a company through their front desk.

While this specific incident might be lighthearted, it serves as a reminder that ransomware attackers are always adapting their tactics. Here's what you should be aware of:

  • Be cautious of any unsolicited calls or emails claiming a security breach. Don't engage with the sender and report them to the IT department immediately.

  • Never click on suspicious links or attachments. These could contain malware that gives attackers access to our systems.

  • Be mindful of what information you share over the phone. Hackers may try to sound legitimate to gather details about our company's network.

  • Stay informed about cybersecurity best practices. The IT department may send out phishing simulations or training materials – take advantage of these resources.

By staying vigilant and following these tips, we can all play a part in protecting our company from ransomware attacks. Remember, if you see something suspicious, report it!


FBI Alert: Increase in Social Engineering Attacks

The FBI has issued a warning about the rise in social engineering attacks targeting personal and corporate accounts. These attacks employ methods like impersonating employees, SIM swap attacks, call forwarding, simultaneous ringing, and phishing, which are designed to steal sensitive information.

Key Techniques:

  • Employee Impersonation: Fraudsters trick IT or helpdesk staff into providing network access.

  • SIM Swapping: Attackers take control of victims' phone numbers to bypass security measures like multi-factor authentication.

  • Call Forwarding and Simultaneous Ring: Calls are redirected to the attackers' numbers, potentially overcoming security protocols.

  • Phishing: Cybercriminals use fake emails from trusted entities to collect personal and financial data.

How to Protect Yourself:

  • Ignore unsolicited requests for personal information.

  • Ensure unique, strong passwords for all accounts.

  • Contact mobile carriers to restrict SIM changes and call forwarding.

  • Regularly monitor account activity for signs of unauthorized access.

If Compromised:

  • Immediately secure accounts by changing passwords and contacting service providers.

  • Report the incident to the FBI’s Internet Crime Complaint Center at www.ic3.gov.

Stay vigilant and implement these protective measures to defend against these sophisticated social engineering threats.


Smishing Scam Hits the Road!

Beware of texts claiming unpaid tolls! Scammers are targeting drivers with smishing attacks. The texts claim that the recipient has unpaid tolls. Don't click links or give out info. Report scams to the FBI: https://www.ic3.gov/Home/ComplaintChoice. Stay safe!


Data Breach at Hospital: Ex-Employee Admits to Sharing Patient Records

Patients at Jordan Valley Community Health Center in Missouri are being notified of a data breach involving over 2,500 individuals. The culprit? A former employee, Chante Falcon, who admitted to accessing and sharing patient records.

Facing federal charges for wrongful disclosure of patient information, Ms. Falcon pleaded guilty and awaits sentencing. The potential penalty? Up to 10 years in prison.


Tax Time Trouble: Don't Fall Victim to Tax Scams!

It's tax season again! While you're busy gathering documents and filing your return, scammers are out in force trying to steal your money and personal information.

This year, security experts are seeing a rise in Artificial Intelligence (AI)-powered tax scams. These scams can look and feel more sophisticated than ever before, making them even trickier to spot.

Here are some red flags to watch out for:

  • Urgency and Threats: Scammers often try to pressure you into acting quickly by claiming you owe overdue taxes or face penalties.

  • Suspicious Emails and Texts: Be wary of emails or texts claiming to be from the IRS or tax software companies. Don't click on links or attachments unless you're sure they're legitimate.

  • Phishing for Information: Scammers may ask for your Social Security number, bank account details, or other personal information you wouldn't normally share via email or text.

Stay Safe This Tax Season:

  • Go Directly to the Source: If you receive a message about your taxes, contact the IRS directly using a phone number you know is correct (don't use the one provided in the message).

  • Don't Share Personal Information Unsolicited: The IRS will never ask for sensitive information through email or text message.

By following these tips and staying vigilant, you can protect yourself from tax scams and ensure a smooth tax season!


Tracking AI's Influence in Global Elections

Rest of World, a news organization, has launched a new initiative to monitor and document the impact of artificial intelligence (AI) on global elections. This effort comes as generative AI tools become increasingly accessible, presenting both innovative uses and potential risks in political contexts.

Scope and Objective: The project tracks AI incidents across the globe, particularly focusing on regions outside the Western hemisphere. From the general elections in Bangladesh to those in Ghana, the tracker will compile AI-generated content related to elections, encompassing both positive applications and problematic issues like misinformation.

Noteworthy Incidents:

  • In Belarus, a ChatGPT-powered virtual candidate is providing voter information while circumventing censorship.

  • AI-generated videos have enabled Pakistan’s former Prime Minister Imran Khan to address the public from imprisonment.

  • A spam campaign against Taiwan’s president has been linked to a Chinese Communist Party actor.

  • Deepfake videos falsely depicted Bangladeshi candidates withdrawing on election day.


Comprehensive ChatGPT Risk Assessment

Walter Haydock from StackAware has conducted an exhaustive risk assessment of OpenAI's ChatGPT. This summary encapsulates the critical findings and documentation from the assessment, aiming to enhance your understanding and governance of AI tools.

Key Findings from the Assessment:

  • Purpose and Criticality: ChatGPT serves multiple functions, from generating marketing content to converting unstructured data into structured formats. Its operational importance is significant, with potential major business impacts in case of system failure.

  • System Complexity and Reliability: Despite its complex nature, ChatGPT has shown reliable performance, although occasional performance and availability issues have been documented on OpenAI’s status page.

  • Environmental and Economic Impacts: ChatGPT's operation is energy-intensive, with considerable carbon emissions and water usage. However, it also offers potential economic benefits, potentially contributing significantly to global productivity and economic output.

  • Societal and Cultural Impacts: The system’s ability to automate repetitive tasks could liberate millions from mundane work but also poses risks to employment and misinformation, particularly during sensitive periods like elections.

  • Legal and Human Rights Considerations: The system's deployment must carefully navigate potential impacts on employment and privacy, with strict adherence to legal and human rights norms.


Deepfake Phishing Attempt Targets LastPass Employee: Audio Social Engineering on the Rise

A recent incident reported by LastPass sheds light on a concerning trend: the use of audio deepfakes in social engineering attacks.

What Happened?

  • A LastPass employee received a series of calls, text messages, and voicemails supposedly from the company's CEO.

  • The voice messages utilized deepfake technology to convincingly mimic the CEO's voice.

  • The attacker attempted to pressure the employee into performing actions outside of normal business communication channels and exhibiting characteristics of a social engineering attempt.

Why This Matters:

  • This incident marks a potential turning point in social engineering tactics. Deepfakes can bypass traditional email-based phishing attempts and create a more believable scenario for the target.

  • Audio deepfakes pose a significant threat because they exploit the inherent trust we place in familiar voices.

How LastPass Responded:

  • The targeted employee, recognizing the red flags of the situation, did not respond to the messages and reported the incident to internal security.

  • LastPass highlights the importance of employee awareness training in identifying and reporting social engineering attempts.


Change Healthcare Cyberattack: A Costly Reminder for Physicians

A recent cyberattack on Change Healthcare, a major healthcare IT provider, has had a significant impact on physicians across the country. According to a KnowBe4 article, a staggering 80% of physicians reported financial losses due to the attack. UnitedHealth announced that the attack cost the company $1.6 billion alone.

The High Cost of the Breach

The article details the financial strain placed on physician practices:

  • Revenue Loss: Disruptions caused by the attack made it difficult to submit claims and verify benefits, leading to lost revenue.

  • Increased Costs: Extra staff time and resources were required to complete revenue cycle tasks.

  • Personal Expenses: Some practices were forced to use personal funds to cover business expenses.


USPS Now the Most Impersonated Brand in Phishing Attacks

Phishing attacks are one of the most common cyber threats. Criminals impersonate well-known brands to trick people into giving up personal information. According to a recent report, the United States Postal Service (USPS) has surged to the top spot on the list of most impersonated brands.

Here are some tips to avoid falling victim to a USPS phishing attack:

  • Be wary of emails or text messages that claim to be from USPS about a delivery issue or package requiring additional fees.

  • Do not click on any links or attachments in suspicious emails or text messages.

  • If you are unsure about the legitimacy of an email or text message, contact USPS directly.

  • Be mindful of the sender's email address and look for typos or inconsistencies.

By following these tips, you can help protect yourself from phishing attacks.
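If you’re curious what the sender-address tip looks like in practice, here’s a minimal Python sketch that flags senders whose domain isn’t on an allow list. The domain list is an assumption for illustration, not an official USPS list, and real lookalike domains can be far more subtle.

```python
# Illustrative sketch only: the allow list below is an assumption,
# not an official list of USPS sending domains.
LEGITIMATE_DOMAINS = {"usps.com"}

def is_suspicious_sender(from_address: str) -> bool:
    """Flag senders whose domain is not on the allow list."""
    # Take everything after the last "@", normalize case, drop a
    # trailing ">" from addresses like "<no-reply@usps.com>".
    domain = from_address.rsplit("@", 1)[-1].lower().rstrip(">")
    return domain not in LEGITIMATE_DOMAINS

print(is_suspicious_sender("tracking@usps.com"))         # False
print(is_suspicious_sender("delivery@usps-alerts.net"))  # True
```

Keep in mind that attackers can spoof the display name or the From header itself, so a check like this is only one signal among many.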


In News Tags Security Awareness, Newsletter, AI, Deepfake, Malware, Phishing
Comment

AI security and healthcare - created by ChatGPT

Embracing AI with Care: A Guide for using AI in the healthcare workplace

April 10, 2024

This is an article I put together for internal communication on my company’s intranet. I actually put together two different articles; both are along the same lines, just written differently. I would love feedback on anything I may have missed. Otherwise, feel free to use this as part of your company’s internal communication. This was mostly written by ChatGPT.

Introduction

In the rapidly evolving world of healthcare, Artificial Intelligence (AI) has emerged as a beacon of hope and innovation. From improving patient outcomes to optimizing operational efficiencies, AI's potential is undeniable. However, as we integrate these powerful tools into our daily operations, it's imperative to approach AI with a blend of enthusiasm and caution.

The Power of AI in Healthcare

AI's application within healthcare spans from predictive analytics in patient care to automating administrative tasks, allowing healthcare professionals to focus on what they do best—caring for patients. AI algorithms can analyze vast amounts of data to predict patient deterioration or optimize treatment plans. Additionally, AI-driven chatbots can enhance patient engagement and support, providing timely information and assistance.

Ethical Considerations and Patient Privacy

While AI can significantly improve efficiency and patient care, its implementation in healthcare comes with profound ethical implications, especially concerning patient privacy and data security. As stewards of sensitive health information, it's our collective responsibility to ensure that AI tools are used ethically and in compliance with all applicable laws and regulations, such as HIPAA.

  • Transparency and Consent: Patients should be informed about how AI might be used in their care, including the benefits and potential risks. Obtaining informed consent is not just a legal requirement; it's a cornerstone of trust.

  • Data Privacy: Always ensure that AI systems handling patient data are secure and compliant with data protection laws. Anonymization of data before AI analysis is a critical step in safeguarding patient privacy.

  • Bias and Fairness: AI systems are only as unbiased as the data they're trained on. It's essential to continuously monitor and evaluate AI tools for any form of bias, ensuring equitable healthcare outcomes for all patients.
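To make the data privacy point concrete, here’s a minimal sketch of scrubbing obvious identifiers from free text before it goes anywhere near an external AI service. The regex patterns are illustrative assumptions; real de-identification (for example, HIPAA’s Safe Harbor method) covers many more identifier types and should never rely on a few regexes alone.

```python
import re

# Illustrative patterns only: real de-identification covers names,
# dates, medical record numbers, and many other identifier types.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[SSN]":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholders before AI analysis."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

note = "Patient John, SSN 123-45-6789, email john@example.com"
print(redact(note))  # Patient John, SSN [SSN], email [EMAIL]
```

A pattern-based scrub like this is a backstop, not a guarantee; names, dates, and free-text context can still identify a patient.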

Cybersecurity Implications

The integration of AI into healthcare systems increases the complexity of our cybersecurity landscape. AI can both bolster our cybersecurity defenses and represent a novel vector for cyber threats. Therefore, a proactive and informed cybersecurity approach is essential.

  • Adherence to Security Policies: All use of AI technology must comply with our comprehensive security policies, which are designed to protect both patient data and our IT infrastructure. This includes strict access controls, regular security audits, and adherence to best practices in AI ethics and governance.

  • Education and Awareness: Employees must be educated about the potential cybersecurity risks associated with AI, including social engineering attacks that leverage AI-generated content.

  • Handling of sensitive data: It is crucial to ensure that sensitive data is not entered into or processed by AI systems that are not under our direct control and that do not meet our strict security and privacy standards. Employees should avoid the use of unauthorized AI tools and platforms that could inadvertently expose sensitive patient information or proprietary data. This includes being aware of third-party companies that have integrated AI into their platforms.

  • Secure AI Development: AI systems must be developed and maintained with security in mind. Threat modeling helps to identify potential issues before they arise. Regularly updating and patching systems helps maintain the integrity and security of systems.

  • Vigilance and Reporting: Employees are empowered to report any suspicious activities or vulnerabilities. Early detection is key to preventing cyber incidents or data privacy issues.

Looking Ahead

As we journey forward, integrating AI into our healthcare practices, let us do so with a vigilant eye on the ethical, privacy, and security implications. By fostering a culture of responsible AI use, we not only protect our patients and their data but also contribute to the advancement of healthcare, making it more accessible, efficient, and effective for all.

Conclusion

The integration of AI in healthcare represents a frontier of endless possibilities. Yet, as we harness these technologies, we must navigate this terrain thoughtfully and responsibly, ensuring that we remain steadfast in our commitment to patient care, privacy, and security. Together, we can create a future where AI empowers us to deliver better healthcare than ever before.

In Advice Tags AI, Healthcare, Security Awareness
Comment

Exploring the newsletter below - Image created with the help of ChatGPT

Security Awareness Newsletter March 2024

April 1, 2024

This is a security newsletter I’ve put together as part of our security awareness program. This leans more towards healthcare and news items that are more general in nature. I’ll have a more technical focused newsletter later this week that’s targeted at security teams. Feel free to take this newsletter and use it internally as part of your security awareness program.

The Great Zoom-Skype-Google Masquerade: Beware of digital doppelgängers. Fake Zoom, Skype, and Google Meet sites are the latest traps set by cyber tricksters. These spoofed sites can trick users into downloading harmful software that compromises their computers. Ensure you’re clicking on the real deal to keep those malware masqueraders at bay. Also beware of QR codes that try to steal credentials as part of this type of attack.

Beware of fake websites mimicking popular brands!: Typosquatting attacks are surging, and cybercriminals are exploiting user mistakes to steal login credentials and spread malware. Typosquatting is where an attacker registers a domain similar to one a person is familiar with, which increases the chance a malicious link will be clicked.
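As a rough illustration of how defenders hunt for typosquats, a candidate domain can be compared to the real one by edit distance; anything a character or two away deserves scrutiny. This sketch uses made-up domains, and real detection also has to handle homoglyphs, added keywords, and alternate TLDs.

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def looks_like(domain: str, brand: str, threshold: int = 2) -> bool:
    """Flag domains within a small edit distance of the real domain."""
    return domain != brand and levenshtein(domain, brand) <= threshold

print(looks_like("examp1e.com", "example.com"))   # True
print(looks_like("unrelated.org", "example.com")) # False
```

Services like Have I Been Squatted do this kind of comparison (and much more) at scale across newly registered domains.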

Small Businesses Hit Hard by Cybercrime: Some social engineering techniques highlighted in the article include malicious ads, attackers starting a conversation before trying to get the person to take an action, and a shift toward malicious PDF attachments. These types of attacks help launch ransomware against small businesses.

Beware of AI-Driven Voice Cloning in Vishing Scams: The Better Business Bureau (BBB) has issued a warning about the rise of voice phishing (vishing) scams utilizing AI-driven voice cloning technology. Scammers can now mimic voices convincingly with just a small audio sample, leading to fraudulent requests for money transfers or sensitive information. Tips to Stay Safe: 

  • Pause Before Acting: Resist the urge to act immediately on unexpected requests, even if they seem to come from a familiar voice. 

  • Verify Directly: Contact the supposed caller using a known, saved number—not the one provided in the suspicious call. 

  • Question the Caller: Ask specific questions that an impostor would struggle to answer correctly. 

  • Secure Your Accounts: Implement multi-factor authentication and verify any changes in information or payment requests. 

Update on Change Healthcare Cyberattack Recovery: Change Healthcare is on track to bring its systems back online by mid-March following a cyberattack that has caused widespread disruption since February 21. The cyberattack has significantly affected healthcare operations nationwide, with providers facing difficulties in payment processing, insurance verification, and clinical data exchange. This highlights why security awareness is so important. Identifying and reporting security threats to the organization is the responsibility of everyone. 

Beware of Tax Season Scams Targeting SMBs and Self-Employed Individuals: As tax season unfolds, a new scam has surfaced targeting small business owners and self-employed individuals. Scammers are using emails to lure victims to a fraudulent site, claiming to offer IRS EIN/Federal tax ID number applications. However, this service is free through the IRS, and the scam site is designed to steal personal information, including social security numbers, creating a significant risk for identity theft and fraud. A Microsoft report identifies green card holders, small business owners, new taxpayers under 25, and older taxpayers over 60 as prime targets for these scams. Check Point has some example phishes in their tax scam article. 

Apple Users Beware: "MFA Bombing" Phishing Attacks on the Rise: By leveraging Apple's password reset system, attackers can bombard users with password reset prompts. If a person clicks "Allow" on one of the prompts, the attackers can gain access to the user's account. The attackers may also call the person pretending to be Apple support. Ways to protect yourself from this attack include not clicking "Allow" on any of the prompts and contacting Apple directly if you receive a suspicious call.

In News Tags newsletter, Security Awareness, social engineering, Typosquatting, AI, Healthcare, tax fraud, Multi-Factor Authentication
Comment

Exploring the job market with my handy briefcase

Exploring the cybersecurity job market from late 2023 to early 2024

March 13, 2024

A job search is work

Below you will find several log entries from my recent job search. I wanted to do this to highlight how things have changed and show that even for someone with several years of experience, it’s tough. I started my search around the end of November and wrapped it up in early March. The holidays certainly slowed things down, but it still took a good three solid months. Getting hired at the end of a year is rare because companies aren’t looking to add more to their books; their focus is to close out the year and look as good as possible from a financial standpoint.

A lot more job postings went up at the beginning of the year, and things seemed to pick up from an outreach and interviewing perspective. The job I eventually accepted had its posting up in early December, but the company didn’t start talking to me until the beginning of the year.

I tailor my resume to each role, and despite all that I still got A LOT of rejection letters. In fact, I just got another one yesterday. Prepare for baseball-type stats, where it’s normal to bat .300 instead of .800. I did notice that a company is less likely to talk to you if you’re not in their city. Through my network I heard this quite a bit, despite my willingness to relocate to certain parts of the country. Talking to some recruiters, it was certainly a weird market, with a lot of companies wanting people back in the office, and with the layoffs last year it was harder to stand out.

Another factor is my background. I have a broad background and have successfully implemented programs in multiple disciplines. I’m confident I can adapt my skillset to any role; I’ve done it in just about every job I’ve had. Unfortunately, a lot of hiring managers are looking for a specific skillset and only that skillset. Recruiters are another layer, as they are often just looking for keywords in a resume. I also found that AI is starting to play a part. I had a screening call that utilized AI. I tried to better understand how that worked on the backend but couldn’t find much material. I’d like to see how AI is impacting candidates, both positively and negatively.

Last year I took some time to reflect on what I really wanted to do and where my background and skillset could really be useful. I found that security awareness was something I’ve done at all my previous jobs and that there were companies hiring and paying well enough for the role. That’s where I focused my job search and that’s where I’ve ended up. I’m excited for what’s ahead. Below is my journey to that role.

Log

Entry 1: Willo and one-way video interviewing. This was an interesting experience because I was given a set of questions and asked to record my responses. I had never done this before and found it interesting. I had three minutes to record and could save and continue, or re-record. There was only one question I needed to re-record multiple times, either because I ran out of time or screwed up. I thought it was a great way to do a screening. I also loved that the screening involved behavioral questions, which I’m a big proponent of using.

Entry 2 (five days later): To this point I’ve applied to 16 roles: I’ve got one early-stage interview set up; I’ve had one one-way video screening; and two, “we think you’re a great candidate but we don’t want to talk to you.” Of those last two, I know one was due to pay, because they reposted the role with the top of the salary range removed; the other was probably my resume. The one early-stage interview I have is due to knowing someone at the company who put me in for a role, which is why I always recommend networking to find a job.

I haven’t had to do a job search where I submitted blindly to companies for over 10 years, so this is an experiment for me. Is my resume just not up to snuff anymore, or is there some other factor? A couple of factors I’m keeping in mind are that it’s the end of the year, which means deadlines and goals. People outside of government work are usually pretty busy trying to wrap up the year, so hiring takes a back seat. Financially, people aren’t looking to add budget to their team at the end of the year.

It’s also been a tougher job market with the economy being down. I’ve talked to recruiters, and they say it’s been a slow, weird end of the year. There’s more competition in the job market, so I’ll get fewer looks or get looked over. I’m also being more picky about the opportunities I apply for, because I feel like I know what I want to do. My experience can be an issue because it’s a little all over the place. The closest I came to niching down was application security, but two years into that role I was promoted to manager over security engineers, pentesters, and application security.

Which brings me back to my resume. When I redid it over 10 years ago, it was due to not getting callbacks. It had taken 15 months to find a new job. Redoing it in the current format increased my interview opportunities by 50%. My resume format may be dated. My theory is that my resume may work for hiring managers but not for recruiters or talent acquisition people, because they’re not in the field. They’re looking for those specific words and probably something more eye-catching. I’ve already started experimenting with different formats, and I’ll share the results here when that’s done.

Entry 3 (Star Date -299052.05): The rejection emails have come in. I got two this morning, and I expect more if I haven’t been reached out to by a recruiter. This means my resume is a problem and I need to work on it. I watched this talk from BSides San Francisco 2023 by Zach Strong on hacking the hiring process. I think I need to simplify my resume and get it back down to under two pages. My master resume is currently at five pages. When I customize it for a job role it gets down to four pages, but I think I still need to cut that in half. For the next role I’m interested in, I’ll have to be brutal with my cuts. For the last few, I added a new section called “Applicable Qualifications” or “Applicable Experience” to try to highlight what makes me a potential candidate. We’ll see if that helps.

Ultimately, networking is still the best way to get in front of the hiring manager. I’ve gotten in front of one, had the interview, and then haven’t heard from them in about a week. This is unfortunately typical, and disappointing. I’ve had enough of these that the behavior doesn’t bother me as much anymore. I’ve probably eliminated myself, but it’d still be nice to be told that and given feedback on what I’m lacking.

Entry 4 (some time later): More rejection letters have come in. I’ve gotten my resume down to two pages. I’m not sure the format is great, but I like it, and I’d like to find an organization that wants that kind of format. That’s me being naïve, though, and I’ll end up changing it. I want to make small tweaks just to see if I start getting more screening calls.

I did recently talk to someone else doing a job search, and they said it was tough. They had read an article, or something on Reddit, where someone applied to 500 jobs, got 20 callbacks, and received two offers. I think that highlights the current state of the job market. It’s tough, but I feel like I’m starting to see more posts go up as people ramp up for 2024.

To be continued…

Entry 5 (later): I got the rejection email from the place that had me do a one-way interview. I noticed it mentioned AI in the email and now I’m curious what that actually means for the hiring process.

Ignyte AI is the tool that was used for the screening. Looking it up there’s not a lot of information on it other than marketing material. Definitely something to explore in the future. Here are some links I found on it.

https://www.ignyteai.com/

https://huntscanlon.com/recruiting-platform-ignyte-ai-launches/

Entry 6 (Happy New Year!): I got a screening call set up for a position I applied for a few weeks ago. Hiring slows down pretty significantly during the holidays; either the talent acquisition people are out, or the hiring managers are out, or both. I’m hoping things pick up, though I expect I’ll continue to get rejection letters.

Entry 7 (busy): I’ve been focusing on getting podcast episodes and blog posts produced and published, so this has gone by the wayside a little bit. The screening call and interview with the hiring manager went well. I am set up for another interview with a panel of people, and then a decision will be made. I have gotten more rejection letters, but I also recorded and published a really interesting podcast with Erin Barry from Code Red Partners.

I learned a couple of things from the conversation. As I suspected, it’s a weird time to be looking for a job. Networking is still king, but there are also some really crappy things that organizations do. They’ll put up a posting just to test the market. There are also people just doing keyword searches who never get anywhere near your resume. One of the key points she made was to not get down on yourself during the process. There are a lot of factors that go into an opening that we just don’t see.

As part of another recording session, the guest pointed out that my LinkedIn page needed some work. I followed their recommendation about adding a banner and cleaned some other things up. Today I got a call from a recruiter for a director of cybersecurity position in my area. I’m not sure it’s a great fit, but the resume is off, and we’ll see if I ever hear anything back.

Entry 8 (end of January): I just had a final interview for the one position that has progressed significantly. I’m still in the running for another position where the conversation started in early December, but it’s been very quiet. Talking with the hiring manager, it sounds like a lot of internal politics and a question about remote work. The position is unfortunately up north, in a region that is off limits for my family. I am still looking at job postings and applying to the ones I find interesting. I have also reached out to a recruiter about one position but haven’t heard back from them.

I like the idea of reaching out to recruiters and feel I should have done it before, though I imagine some of them may not get back to me because they’re busy. I have seen encouraging signs for the market, with recruiters reporting more jobs being posted. There are also more people getting back into the job hunt, so I would expect it’s still a competitive market. The place where I had my final interview is local, so I have an advantage there because a discussion around relocation won’t be necessary.

Entry 9 (beginning of February): Shortly after my final interview for one position, another one started with a screening. That has progressed to another panel interview that I’m still waiting to hear back on. I still have not heard anything from the one I had a final interview for. I’m okay with that, because I’m still in process on a couple of other things, and I continue to find security awareness positions being posted. It seems to be a position a lot more companies are looking at, which hopefully means I can land in one. I haven’t really talked about it here, but security awareness is where I want to head with my career. Several years ago it was an add-on to GRC or other roles. I did it as a passion project, but there was never the thought of it being a full-time gig. I’m happy to see this change, because I have the experience, knowledge, and desire to be successful in this discipline. It’s now just a matter of convincing someone else I’m right for the job.

I will say the waiting is a bit frustrating. Even if things are being lined up, a yes or no would be fine with me, because it allows me to adjust, something I’ll talk about more in a future blog post. I did have some progress on the first position, where I’ve had some conversations. That has actually shifted to a discussion about being a contractor, which would significantly help me continue down the self-employed path.

One other item I want to talk about is using AI to prepare for an interview. I took the job description and the information I got from the recruiter and had ChatGPT create some interview questions for me. I then wrote the questions on one side of a notecard and my answers on the other, and practiced asking and answering the questions out loud. This is something I’ve always done for interviews, but AI made creating the questions a lot easier and made them applicable to the questions I’d actually get asked. I had a technical assessment in the panel interview. I suck at technical questions in interviews; I always overthink them. I didn’t do great, but the idea that came from that experience was to use AI to practice for the technical assessment in an interview.

Entry 10 (later that week): Got a call this morning about my salary requirements for one job. Also got an email about not moving forward in another interview process because of the competitive talent pool. I’ll address both below.

Salary requirements are always an interesting thing for me. I am not a person who is motivated by money; I’ve reached all my financial goals at the range I’m in now. I’ve been told I could easily go make 200k, and I have several peers who do, but I don’t need that much money. The problem with telling people that, though, is that I get the sense they feel bad and then don’t give me the work I need to stay busy. So I’m in this weird balancing act of taking less money or making my requirements higher. I’m always willing to negotiate lower if it’s a position I’m interested in. I’m also very likely overthinking it.

It’s tough getting notice that I won’t be moving on in a process or that another candidate was selected. I got no feedback other than that it was a competitive pool of candidates, which I have no doubt it was. I was told salary was not a factor in the decision. This is the part where I need to remind myself that I may have interviewed well, but the decision could have come down to any number of factors out of my control. Someone may have been referred. There may have been a preferred internal candidate. The process may not have been set up to let me shine properly. I would still have liked more feedback, because I want to improve, but I’ve said the same thing to other candidates: I had multiple people I liked, and one just edged out the other for whatever reason. The one thing I knew I could have done better on was the technical assessment. I have played around with AI a bit, and I think it would be very useful for practicing for a technical assessment. I’ll have a future blog post on the topic.

Entry 11 (last one): I got a job offer the next week and have started the onboarding process, which is why I haven’t updated this post until now. I start next Monday, and this post will go up shortly after I start. The onboarding process has been good. I think a lot of organizations have embraced automation and are using platforms to onboard people. This is a good thing, and it seems like I’m getting a lot of the stuff I need lined up ahead of time. I’ve also got my first-day orientation schedule, which is nice to have and know ahead of time.

I’m excited for this opportunity. I’ll be focusing my career on security awareness, a role that wasn’t around a few years ago. Organizations seem to be taking security awareness a lot more seriously instead of treating it as just a checkbox. I’ve been doing security awareness at organizations as a passion project for years, so it’s nice to have a role where I can focus on just that. I’ll be writing about it more in other blog posts and probably talking about it on the podcast. While I have a full-time job now, I do plan to continue producing content on this site.

In Experiences Tags hiring, interviewing, job search, job postings, AI
Comment

Logs somewhere cold

Exploring Information Security - Change Log - February 22-29, 2024

March 1, 2024

This is a log of changes to the site over the last week.

New pages:

Zero Trust - Deep Dive - Getting deeper into Zero Trust

Podcast posts:

What cybersecurity tools every organization should have - Hacker Historian Mubix joins me to discuss useful tools for security

Blog posts:

Impressions from the 2024 Palmetto Cybersecurity Summit - Thoughts from last week’s conference

7 Tips and Best Practices for Threat Modeling - Some of the tips and best practices I do to make threat modeling efficient and effective

Leveraging AI to Prepare for an Interview - My experience and some ideas around using AI to prepare for an interview



In Website Tags website, change log, AI, Threat Modeling, Zero Trust
Comment

The five stages of cybersecurity grief from Mathieu Gorge at the 2024 Palmetto Cybersecurity Summit

Impressions from the 2024 Palmetto Cybersecurity Summit

February 26, 2024

Last week I had the pleasure of attending the 2024 Palmetto Cybersecurity Summit in Columbia, SC. It was a great conference with a good venue and really great speakers. The keynote speakers brought really great insights, and of course the hot topic was artificial intelligence (AI). I’m hoping to attend again next year!

Prior to the conference I presented at ColaSec, a local cybersecurity user group that I helped start about 10 years ago. I gave the threat modeling talk that I presented at the conference the next day. I like using ColaSec as a first run for my talks because I get a lot of really great feedback to refine them. You can watch the talk on ColaSec’s YouTube page. I adjusted the acronyms section and made some other minor adjustments to make the talk flow better. That helped at the conference the next day, because I realized I had 10 fewer minutes for my presentation due to a reading error.

What I’m really excited about for this year’s version of the talk is doing a demo of a live threat modeling session. I have about 20-25 minutes of content, and then we get into the demo. I like it because I want people to get a feel for how a threat modeling session should flow. I am planning to switch up the demo for each talk so that each version is a little different.

One of the things I rate conferences on is the drinks and food. I’m happy to report that the conference got an A in both regards. They had tea, which is great because I’m not a coffee drinker, and the food was pretty good. Sometimes you go to a conference and the food is just meh, or comes in a box. That was not the case here. The other thing to call out is the chairs: big, comfy, adjustable chairs. You could spend all day in those chairs.

The keynotes were really great. Mathieu Gorge talked about cybersecurity at a broader global level and the 5 Pillars of Security Framework. The picture above shows the five stages of cybersecurity grief. William MacMillan, former Chief Information Security Officer (CISO) at the Central Intelligence Agency (CIA), talked about his experience taking over there right before the SolarWinds breach came to light. He also talked about platform-centric versus best-in-breed, and how a platform can provide simplicity to security teams that live in a world of complexity. Both provided different perspectives and insights on the cybersecurity landscape and dropped some thought-provoking ideas.

The majority of talks I attended were around AI. Before I get to that, though, I also went to Michael Holcomb’s talk on industrial control systems (ICS/OT). He gave some really good insights, but more impressively, he has put together free ICS/OT courses on YouTube for people looking to get into the ICS/OT space.

The second day was filled with talks on AI. That will be a theme throughout this year and potentially for the next 2-3 years. I love that it’s something new to learn. A lot of the conferences I’ve attended in the last few years haven’t really provided me with the opportunity to learn new things. A lot of the talks just confirmed my own ideas and thoughts around security topics, and nothing really challenged those ideas either. There is value in confirming my knowledge and experiences, but I want to continue to learn. AI is that current topic.

Dr. Sybil Rosado talked about the social engineering aspects of AI. While she covered some of its malicious uses, she was a big proponent of using AI and learning how to work with it. She’s a professor at Benedict College in Columbia, SC, and has seen students using it; she actually likes that it’s making their writing better. Dr. Donnie Wendt talked about deepfakes and the role they’re playing in the world today. Deepfake technology is super easy to get started with. My own thought is that deepfakes are a great way to improve a security awareness program simply by talking about them and showing some examples. Plus, there are already attacks where someone uses AI to imitate a voice and ask for money to be sent. Finally, Tom Scott talked about managing your security program with AI. One nugget that really stuck with me: AI does not remember your interaction once you start a new chat. To continue to train it on your context, you need to keep using the same chat.
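That chat-memory nugget can be illustrated with a tiny sketch. The `reply` function below is a toy stand-in for a chat model call, not a real API, but real chat endpoints behave the same way: each request only "knows" the message history you send along with it, so a brand-new chat starts from nothing.

```python
# Toy illustration of stateless chat: the "model" can only see the
# messages passed in with the current request.

def reply(messages):
    """Stand-in for a chat model call; it sees only `messages`."""
    seen = [m["content"] for m in messages if m["role"] == "user"]
    return f"I can see {len(seen)} user message(s)."

# Keeping one running history preserves context across turns...
history = []
history.append({"role": "user", "content": "Here is our incident response plan."})
print(reply(history))   # the model sees 1 user message

history.append({"role": "user", "content": "Now summarize it."})
print(reply(history))   # same chat: the model sees both messages

# ...while a new chat has no memory of the earlier exchange.
fresh = [{"role": "user", "content": "Now summarize it."}]
print(reply(fresh))     # new chat: only 1 message, earlier context is gone
```

The practical takeaway is the same one from the talk: if you want the model to build on earlier context, keep the conversation in one chat (or resend the history yourself).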

The conference was a really great start to the year. I learned some new things, met some new people, and caught up with people I haven’t seen in a while. I’d definitely recommend checking it out next year. Talking to one of the organizers, it sounds like it’s going to get even bigger.

In Experiences Tags AI, Security Conference, ICS/OT

Interesting security reads: AI, Typosquatting, and Okta

December 5, 2023

Increasing transparency in AI security - Google Security Blog - Interesting article on AI security and how it falls prey to the same supply chain attacks as the software development lifecycle. It goes over how Sigstore and SLSA can help improve the security of the AI development lifecycle.

Have I Been Squatted - This comes from Risky Biz News and looks like a very interesting tool for companies looking to identify whether any typosquatted versions of their domains exist that could be used for phishing attacks.
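To give a feel for what typosquat detection involves, here is a minimal sketch that generates common one-edit typo variants of a domain. This is my own illustration of the general idea, not the actual algorithm used by Have I Been Squatted; real tools also check homoglyphs, alternate TLDs, keyboard-adjacency, and whether candidates are actually registered.

```python
# Sketch: generate simple one-edit typosquat candidates for a domain.

def typosquat_candidates(domain):
    """Return sorted one-edit typo variants of `domain` (omission,
    adjacent swap, character doubling) on the part before the TLD."""
    name, _, tld = domain.rpartition(".")
    variants = set()
    # Character omission: exampl.com
    for i in range(len(name)):
        variants.add(name[:i] + name[i+1:] + "." + tld)
    # Adjacent character swap: examlpe.com
    for i in range(len(name) - 1):
        swapped = name[:i] + name[i+1] + name[i] + name[i+2:]
        variants.add(swapped + "." + tld)
    # Character doubling: exaample.com
    for i in range(len(name)):
        variants.add(name[:i+1] + name[i] + name[i+1:] + "." + tld)
    variants.discard(domain)  # drop no-op variants (e.g. swapping "aa")
    return sorted(variants)

candidates = typosquat_candidates("example.com")
print(len(candidates), candidates[:3])
```

A defender would feed these candidates into WHOIS/DNS lookups to see which are registered by someone else; each registered lookalike is a potential phishing domain.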

The Okta story continues - Krebs on Security - The plot thickens: all Okta customers were impacted by the breach, with full names and email addresses stolen. This is valuable information for attackers looking to phish IT administrators with permissions in their Okta tenant.

IceKube - WithSecure Labs - This is an interesting recently released tool that checks Kubernetes environments for attack paths, then provides a visual graph of each path. It could be very useful for teams looking to understand an environment.

Guidelines for secure AI system development - National Cyber Security Centre UK - AI is a bit of a wild west at the moment, but as governments get a better handle on the technology they’ll start putting regulations and controls in place. Guidance is usually the first step, and it’s worth paying attention to if your company, or the products it uses, are starting to adopt AI.

This blog post first appeared on Exploring Information Security.

In Technology Tags Newsletter, AI, Okta, Kubernetes, Open Source

The future of AI and security

September 25, 2023

Artificial Intelligence (AI) is quickly changing the landscape for all of society. It will significantly change our way of life over the next ten years, similar to how computers and mobile devices impacted our lives. If you’re not getting familiar with it now, you may get left behind. This website is really only possible because of AI, and more specifically ChatGPT. I’m able to crank out articles and information far faster than if I were creating the website entirely by myself.

I add a note at the bottom of every page created with the help of ChatGPT so people know when it’s me and when it’s AI. I’ll be writing the blog posts, and AI will be helping me build out all the other pages. You’ll probably notice the difference pretty quickly. I’m noting this because I expect laws to come out in the future that require disclosure when AI was used in the creation of content, similar to how bloggers had to disclose when they were getting money from an entity as part of a post or other content on their website. Let’s dive into the predictions.

The government will regulate AI

As mentioned above, the government will step in to ensure AI is being used in an ethical way. I’m curious how AI-created works will hold up in court around topics such as copyright and data usage. I was hesitant to create an entire website and other documentation using AI because I don’t know whether it would be considered plagiarism or copyright infringement. Amazon recently limited self-published books to three per day. I think there are unforeseen issues that will end up in discussion around AI and its use that will require regulation.

With any document able to be fed into AI, there’s a question for companies around sensitive data being leaked. This can be intellectual property and, more concerning, people’s personal information. As we see incidents where AI leaks this type of information, the government will step in and adjust laws and regulations, if not make new ones.
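One practical guardrail here is scrubbing obvious personal data before a document ever reaches an external AI service. The sketch below is my own minimal illustration of that idea using regular expressions; real data loss prevention tooling is far more thorough, and the patterns and labels here are simplified assumptions.

```python
# Minimal sketch: redact obvious PII before sending text to an AI service.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text):
    """Replace each pattern match with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

doc = "Contact Jane at jane.doe@example.com or 803-555-0142, SSN 123-45-6789."
print(redact(doc))
# → Contact Jane at [EMAIL] or [PHONE], SSN [SSN].
```

Regex-based redaction will miss context-dependent PII (names, addresses), which is exactly why companies will want formal guardrails rather than relying on individual diligence.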

Creators will shift from writing to editing

This includes people like developers, who are already using ChatGPT to write code. While AI is not any good at secure code review, it can help developers get started with writing their own code. This can be a good thing as long as developers use it as a starting point and don’t just shove its output right into production.

There’s no reason not to use ChatGPT for a first draft. I’ve written security policies for a company with just a couple of hours of using ChatGPT and editing the output. This can be a good thing for smaller companies that don’t have a security team. ChatGPT is also able to write things in a much easier-to-understand format, so reading company policies may get a bit easier. Which leads into the next prediction.

This will disrupt documentation

If you’re in Governance, Risk, and Compliance (GRC) or some other discipline within security that focuses on documentation, it’s a good idea to start getting familiar with ChatGPT. There are people already using it, and their output is going to be significantly greater than that of anyone who isn’t. GRC will need fewer people to complete its work. The ones who embrace it will stay because their productivity level is higher.

Summary

AI is a step forward, and I think it’s going to help in a lot of ways. Yes, there will be some bad things and misuses that occur, but overall it’s progress for our society. People creating within the tech space will see the biggest benefit. It will reduce the amount of time it takes to get a piece of code or a document out the door.

As far as securing the data, there will be the usual growing pains that come when a new technology becomes easily accessible to everyone. Guardrails and guidelines will need to be put around the data, as leaking it is the biggest concern with AI. Its benefits, though, could be significant, and so security will again have to balance innovation with keeping people’s information safe.

This blog post first appeared on Exploring Information Security.

In Technology, Advice Tags AI, Predictions

Latest Podcasts

Featured
Jul 15, 2025 - [RERELEASE] What are BEC attacks?
Jul 8, 2025 - [RERELEASE] How to crack passwords
Jul 2, 2025 - [RERELEASE] How to find vulnerabilities
Jun 24, 2025 - [RERELEASE] What is data driven security?
Jun 17, 2025 - [RERELEASE] What is a CISSP?
Jun 10, 2025 - [RERELEASE] From ShowMeCon 2017: Dave Chronister, Johnny Xmas, April Wright, and Ben Brown talk about Security
Jun 4, 2025 - How to Perform Incident Response and Forensics on Drones with Wayne Burke
Jun 3, 2025 - That Shouldn't Have Worked: A Red Teamer's Confessions with Corey Overstreet
May 28, 2025 - When Machines Take Over the World with Jeff Man
May 20, 2025 - How to Disconnect From Cybersecurity
