- Philipp Ludewig

Security Champion Day 2023

Aloha people,

I've spent my Friday in the office, not only because free food Friday is back but also because I had the chance to listen to some great presentations by my colleagues from the InfoSec space at ThoughtWorks. Once a year we have the privilege to leave our client accounts for a day to share knowledge and compete in a CTF. This year Katherine Jarmul kicked the day off as the keynote speaker with a talk focused on hacking machine learning models. She is a great speaker and an inspiration in the field of machine learning.

There is this saying: "turtle or a rifle". It refers to a hack for image prediction in which you paint a certain adversarial pattern on the shell of a turtle. The pattern drives the model's prediction error up until it recognises our small shelly friend as a dangerous weapon. Katherine explained how the hack works and topped it with an example of how this could easily happen to a self-driving car. I am already very suspicious of such cars and their capabilities. The example showed that merely changing the light conditions of the input picture decided whether the ML model drove correctly or rammed the car through the guardrail into the abyss. Uff, that's super scary. I imagine a passing cloud could change the decision of the model.

Moving on from image predictions to biases and data leaks. Another problem newer language models like ChatGPT have is that biases are embedded in them and cannot be fully safeguarded against, since the models are tuned through reinforcement learning. Nobody wants their trillion-dollar language model to spit out fascist, racist or transphobic rhetoric. A lot is being done in the space to stop jailbreaks like DAN or the grandma exploit, but not nearly enough. The problem is that with improved LLMs, bots could manipulate public discourse even more effectively. The YouTuber "Second Thought" has described well why LLMs are fuelling even more right-wing rhetoric online (see here). This has already happened: bots swarmed Twitter a while ago. How should we know whether we are discussing with a person or an LLM?

These models are often trained on real-life data, like Copilot, and this causes them to embed PII. There is much to gain from extracting that information out of the model, be it emails, tokens or passwords. These can be used to hack other platforms and systems, steal money or take over accounts. Just push your AWS access token to GitHub and see how fast bots max out your billing capabilities.

So what can be done? Encrypting everything is one option, be it the training data, the model or the output; nothing is trusted in this process. Additionally, humans can still help here by adversarially training the language model. The last slides of the talk were about when, or whether, we should even use ML at all. One quote from the talk stuck with me:

"AI" isn't magic, it isn't intelligent, it isn't neutral. Use all of our human judgement, capabilities and intellect to decide when, how, why to use it.

The next talk, from Bettina Weinholtz, focused on learning about security in a gamified way. I was especially interested in this one because I wanted to learn how to teach my own team about security like that. The talk centred on capture-the-flag exercises with goodies like rewards, leaderboards and badges. She also mentioned quizzes, but I am not sure how much participants will remember from those. I recently tried out a wheel of misfortune, which is a great way to explore disaster incidents in a safe space through roleplaying, like Dungeons & Dragons. Try it out, it was a massive success.

Following up was a talk by Jeni Rodriguez on Personally Identifiable Information (PII) in application logs and how to detect and prevent it. The main three points were restricting access to the application logs in production, creating a logging strategy for the application, and detecting offending logs in lower environments. I remember that I had created such a logging strategy for a client in the past, in which a log factory used pattern recognition to remove PII before it was logged (see the sketch after the list below). I was a little surprised at myself that I hadn't thought about specific PII-related alerts. These can be produced automatically by filtering logs with specific regexes/queries. This does not prevent logs with PII from appearing in the application logs, but it does detect them. I believe this is a great idea for platform teams to secure each team's logs in production. We as devs need to pay special attention to logs, as they can be created in various ways such as:

  • Exceptions
  • GET requests with sensitive query strings
  • data objects with a toString method
  • Postgres: the JDBC driver's logServerErrorDetail option defaults to true, so server error messages can include actual column data
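
Here is a minimal sketch of the kind of log scrubbing I mean, using Python's standard logging module; the regex patterns are illustrative only and nowhere near a complete PII catalogue:

```python
# Minimal sketch of a PII-redacting log filter using Python's standard
# logging module. The patterns are illustrative -- a real logging strategy
# needs a vetted, far more complete set.
import logging
import re

PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),          # email addresses
    (re.compile(r"\b\d{13,19}\b"), "<CARD?>"),                    # possible card numbers
    (re.compile(r"(?i)(token|password)=\S+"), r"\1=<REDACTED>"),  # secrets in query strings
]

class PiiRedactingFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        msg = record.getMessage()
        for pattern, replacement in PII_PATTERNS:
            msg = pattern.sub(replacement, msg)
        record.msg, record.args = msg, None  # freeze the redacted message
        return True  # keep the record, just scrubbed

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("app")
logger.addFilter(PiiRedactingFilter())
logger.info("user login: jane.doe@example.com password=hunter2")
# -> user login: <EMAIL> password=<REDACTED>
```

The same patterns double as detection queries: point your alerting at them and you get the PII-related alerts Jeni suggested.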

If you still need the PII information from the logs, here is an idea for you: store the logs in an S3 bucket that is locked down for everyone and everything except an AWS Lambda which cleans up the logs.
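
A hypothetical sketch of that setup, assuming an S3 "object created" notification triggers the Lambda; the bucket names and the scrubbing logic are invented for illustration:

```python
# Hypothetical sketch of the locked-bucket idea: a Lambda that is the only
# principal allowed to touch the raw-logs bucket, scrubbing PII on arrival.
# Bucket names and the scrub() regexes are made up for illustration.
import re
import boto3

s3 = boto3.client("s3")
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def scrub(text: str) -> str:
    return EMAIL.sub("<EMAIL>", text)

def handler(event, context):
    # Triggered by an S3 "object created" notification on the raw-logs bucket.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode()
        s3.put_object(Bucket="scrubbed-logs", Key=key, Body=scrub(body).encode())
```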

Last but not least, there was another talk about LLMs and how to use them in the field of InfoSec. In short, there are three opportunities to use AI, each with its own caveats:

  • Alert on intrusion (a rough sketch follows this list)
    • a lot of logs are needed to train such a model in the first place
    • how do you know you are not training on data in which an intrusion has already happened?
    • match not only single actions, but sequences of actions
    • are you training on past incidents or "this is safe" time frames?
  • Code analytics
    • Vulnerability analytics on your code
    • guided search to avoid unsafe patterns while you code
  • Drafting tests
    • something like GitHub Copilot or Google Bard
    • tests could be incomplete in their coverage
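
As referenced above, here is one possible shape for the intrusion-alerting idea: an unsupervised anomaly detector (scikit-learn's IsolationForest) trained on a window you assume to be intrusion-free. The features and numbers are all invented for illustration:

```python
# One possible shape for "alert on intrusion": an unsupervised anomaly
# detector trained on a window assumed to be intrusion-free. The feature set
# (requests/min, distinct paths, failed logins) is invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# "This is safe" time frame: normal traffic features, one row per minute.
baseline = rng.normal(loc=[120, 15, 1], scale=[20, 4, 1], size=(1000, 3))

model = IsolationForest(contamination=0.01, random_state=42).fit(baseline)

# A suspicious minute: traffic spike, path scanning, many failed logins.
suspicious = np.array([[400, 90, 30]])
print(model.predict(suspicious))  # -1 means anomalous, 1 means normal
```

Note how this runs straight into the caveats from the talk: the model only sees single time slices, not sequences, and it is only as trustworthy as your claim that the baseline window was actually intrusion-free.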

After the talks, I had the opportunity to participate in the CTF for the AWS Cloud. Although I've taken part in a few CTFs before, this one stood out from the rest. What set it apart was that each player was assigned an exploitation route mimicking the approach of a real attacker. As someone with little experience in attacking cloud systems, I found this unique format immensely helpful for gaining new insights.

During our race between the offices, we narrowly missed victory, but the experience was incredibly enjoyable. The day turned out to be fantastic, and I must say that I learned a great deal in such a short time.