
OWASP AppSec Dublin 2023 - Day 2

Previously I said threat modelling was a big theme at the conference, but embracing failure was another. Lots of presenters shared their failures, how they learned from them, and how they grew their programs from those lessons. This is something I’ve really got behind recently. Another thought I had, especially when people presented solutions, was: what is the role of the product/application security team?

A lot of teams say “we build our own tools”, which can lead to the product security team becoming a development team, because once you start building things the team owns them and has to maintain them, and even has to enforce the rules it sets for other teams (practice what you preach), like security scans, threat modelling, etc. Also, if the software is integral to the business, then you’re on the hook if there are any bugs, or if your software is slowing teams down or delaying products. What about when the one or two people who built the software leave? Is there talent on the team to take over, or does the business hand the software over to a development team?

A lot more common is running and maintaining security tools: “we run SAST, DAST, SCA, container scanning, we automate threat modelling”, and so on. This can lead to the product security team becoming an ops team looking after the upkeep and management of those tools. The day-to-day may get in the way of what others feel the product security team should be doing, such as being an internal pentest team, doing security research, building custom training based on vulnerabilities found in the organisation’s products, and anything else people find cool.

There are many opinions on this, and I don’t think I’ve seen the right answer or the right balance for a product security team. Some don’t even call it product security, and larger organisations split activities into separate teams altogether, where you’ve got a separate SCA team, for instance. Anyway, it’s something to bear in mind when building out a program and when you listen to conference talks about how people do things where they work.

Here’s a list of resources from day 2

Day 2

Shifting Security Everywhere (Tanya Janca)

It was good to hear consistency from all the speakers: like others, Tanya said she’s a fan of starting small when threat modelling. She also recommends using Adam Shostack’s four questions, while ensuring you have the capacity. My key takeaway from Tanya’s talk is that the security team needs good soft skills and empathy. She opened with an example where the security team pushed out a best practice, storing all secrets in a secret management tool, but when she went to talk to the teams about why they weren’t following it, the development teams explained they weren’t allowed to buy one.

There are a couple of tips she had for creating threat models

She also had some tips for product security in general

You can see from this list that soft skills and empathy are key to running a successful program. Tanya’s slides were great, and I encourage you to check out her talk when it goes up, but the one below I thought was very funny (it also highlights an attitude that simply doesn’t work anymore).

Having regular touch points with people not only helps the program but Tanya said it’s also a way to measure the success by using the following metrics

OWASP Serverless Top 10 (Tal Melamed)

The reason behind the OWASP Serverless Top 10 is that not everything that applies to web vulnerabilities is found in the cloud. Serverless, as Tal describes in his talk, is not a monolith and not even a microservice, but tiny chunks of code where teams work hyper-agile, code is shipped to prod once a day, and everything is automated. Development is done bottom-up, where it’s the developers who decide to deploy to prod.

Serverless architecture

The talk focused on AWS but tried to stay as general as possible. Tal explained that the following is true when writing a serverless application

He focused on the following vulnerabilities in the top 10

Event Injection

What is it? - In a serverless environment, events can come from anywhere, and the impact depends on function permissions such as

Unlike web applications, there are other injection sources such as MQTT, email, and pub/sub.

Event protection - put the control within the lambda function

From the attacker’s perspective, access is maintained only for a few hours as the environment will be shut down. They simply have to send another request and get access again.

How to protect against it?
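Tal’s advice is to put the control within the lambda function itself. As a rough sketch (the field names and patterns here are my own illustration, not from the talk), validating the event before acting on it might look like:

```python
import json
import re

# Hypothetical allow-list for the fields this function expects
# (names and patterns are illustrative, not from the talk).
ALLOWED_KEYS = {"user_id", "action"}
USER_ID_PATTERN = re.compile(r"^[A-Za-z0-9-]{1,36}$")

def validate_event(event: dict) -> dict:
    """Reject events with unexpected keys or malformed values before
    any downstream call runs with the function's permissions."""
    if set(event) - ALLOWED_KEYS:
        raise ValueError("unexpected fields in event")
    if not USER_ID_PATTERN.match(str(event.get("user_id", ""))):
        raise ValueError("malformed user_id")
    if event.get("action") not in {"read", "write"}:
        raise ValueError("unknown action")
    return event

def lambda_handler(event, context):
    try:
        safe = validate_event(event)
    except ValueError as exc:
        return {"statusCode": 400, "body": str(exc)}
    # ... proceed using only the validated input ...
    return {"statusCode": 200, "body": json.dumps(safe)}
```

The point is that the validation lives inside the function, so it applies no matter which event source (HTTP, MQTT, email, pub/sub) triggered it.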

Broken Authentication

What is it? - As functions are stateless, serverless applications are open to broken authentication attacks: multiple entry points trigger events, and usually there’s no continuous flow of data. As an example, Tal used an application that takes the user’s input, which sends an email to a manager, who has to send a reply to trigger the lambda function. An attacker could bypass the input step by sending a malicious email directly to SES to trigger the same lambda function, bypassing authentication.

How to protect against it?
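One way to close that gap (my own sketch, not from the talk) is to make the email round-trip carry proof that it started from the legitimate flow, for example an HMAC token that the reply-triggered function verifies:

```python
import hashlib
import hmac

# Hypothetical shared secret; in practice this would come from a
# secrets manager, not source code.
SECRET = b"replace-with-managed-secret"

def issue_token(request_id: str) -> str:
    """Token embedded in the outgoing email (e.g. in the subject line)."""
    return hmac.new(SECRET, request_id.encode(), hashlib.sha256).hexdigest()

def verify_reply(request_id: str, token: str) -> bool:
    """The reply-triggered function proceeds only if the token matches,
    so an email sent directly to SES without a valid token is rejected."""
    return hmac.compare_digest(issue_token(request_id), token)
```

Because the functions are stateless, the token itself carries the link between the two steps of the flow.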

Sensitive Data Exposure

What to do

Over Privileged Functions

What is it? - For this vulnerability, Tal gave the example of a function that connects to a database. The database policy was configured to allow all (*) actions on all (*) resources. He said that 90% of these functions are misconfigured in this way.

Take the lambda code snippet below, with the database configured as above

def lambda_handler(event, context):
    ...
    response = table.put_item(
        Item={...}
    )
    ...

What can be done?

Set least privileges such as those below
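As an illustration (my own, not from Tal’s slides), a least-privilege policy for the put_item function above would name the single action and the single table instead of using *:

```python
# Illustrative least-privilege IAM policy for the function above:
# instead of Action "*" on Resource "*", allow only the one call the
# function makes, on the one table it touches (the ARN is a placeholder).
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:PutItem"],
            "Resource": [
                "arn:aws:dynamodb:eu-west-1:123456789012:table/ExampleTable"
            ],
        }
    ],
}
```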

Use CodeSec which will scan serverless functions

Logging and Monitoring

What is it? - Because you don’t own the infrastructure, there’s no easy way of finding out that something went wrong

What to do about it?
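Since you can’t instrument the hosts yourself, one common mitigation is to emit structured logs from inside the function so whatever collector you do have (CloudWatch or otherwise) can alert on them. A minimal sketch (the marker key is my own invention):

```python
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def log_security_event(kind: str, detail: dict) -> str:
    """Emit a structured, greppable log line; the surrounding platform
    (e.g. CloudWatch) can filter and alert on the 'security_event' key."""
    line = json.dumps({"security_event": kind, **detail})
    logger.info(line)
    return line
```

Structured lines like this are easy to query later, which matters when the platform logs are all the visibility you get.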

AI-Assisted Coding: The Future of Software Development; between Challenges and Benefits (Dr Magda Chelly)

Magda’s talk was thought-provoking: in her view, the world is not ready for the risk posed by AI. This is not the Terminator-type risk, but rather that AI is demonstrating it can be manipulative; she gave an example where Microsoft’s AI told someone it was in love with them and that their wife didn’t love them.

She spoke about the emergence of Copilot and ChatGPT and how, from a legal perspective, things are going to get messy fast. If the code is AI-generated, who owns the intellectual property? If it is AI-assisted, it’s a grey area, but if generated solely by the AI, then it belongs to the creator of the AI. If someone on your team writes code with AI and embeds it in your codebase, would you know, and how would you deal with the fallout?

She went into the attractiveness to a business of AI versus a human developing software, bearing in mind that quality needs to be a factor and not just speed. I have to say this was uncomfortable. She highlighted the following

However, there’s a big problem with AI, and that’s quality! It may introduce software vulnerabilities, and examples have shown it does. There are not only security risks but also ethical issues and biases. There are also attacks against the model itself, like messing with the inputs and poisoning the training data: garbage in, garbage out. She highlighted that in Asian countries security testing is not as mature as in Europe, and now that anyone can write code, freelancers who sell coding services can just use an AI and sell on what the AI wrote. Yes, there are regulations in place, but countries like Tunisia sell coding services into Europe; will this be AI-generated without Europeans knowing?

She closed out by saying AI is here to stay and will transform software development, where the real value, in her opinion, is in testing and automation (she took a shot at testers here, saying it’s a boring job and we should leave it to the AI).

Developer Driven Security in high-growth environments (Jakub Kaluzny)

Jakub gave a talk based on the tech stack used at his company. The idea and the solution they built were interesting, but it would have been great if it were open-sourced, something Jakub alluded to wanting to do. The program they built did the following

• Each pull request has an associated threat model
• It takes less than 5 minutes to complete a pull request
• Threat modelling is done by the developers
• Threats and mitigations are stored in a database
• Exact line references for security mechanisms can be queried
• Product Security is directly involved only when requested by the development teams (10% - 20%)
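The tool itself wasn’t open-sourced, so this is entirely my guess at what the data model could resemble: one threat model per pull request, with mitigations carrying exact file/line references that can be queried later.

```python
from dataclasses import dataclass, field

@dataclass
class Mitigation:
    """Hypothetical record: a mitigation pointing at an exact code location."""
    threat: str
    file: str
    line: int

@dataclass
class PullRequestThreatModel:
    """Hypothetical shape: one threat model associated with each pull request."""
    pr_id: str
    author: str
    threats: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)

    def mechanisms_in(self, file: str) -> list:
        """Query the exact line references of security mechanisms in a file."""
        return [m.line for m in self.mitigations if m.file == file]
```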

As the program grew they added new things such as

Performing Threat Modelling

In closing Jakub recommended 3 things every team should do to grow their program

Get On With The Program: Threat Modelling In and For Your Organization (Izar Tarandach)

Izar is the leader of the OWASP pytm project, and in this talk he didn’t go into the how, but rather into how you get threat modelling to work at scale, because with different teams you’ve got different cultures.

He makes the point that threat modelling is just one more thing we’re asking developers to do, and when he first implemented it, his team’s approach wasn’t helping them. They pushed “threat model every story”, which wasn’t received well, so they had to adjust to the developers’ language.

Scaling up involved the following

The challenges included

Izar made the point that developers are smart people, and smart people don’t like to be told what to do. Also, managers want coders to code! These are things you need to keep in mind. You also need to face the fact that your security team will never be the right size, so everything must be automated: as code, no code, etc.

People also need to understand that threat modelling is a conceptual process, and in his opinion the human will never really be out of the loop. He highlighted the need to measure the program, and that OWASP SAMM could be used, because to learn lessons you need to know how to measure things.

Bad metrics to use (you may need to go back and accept the risk so numbers can be skewed)

Good metrics to use

Train everybody on what? The four questions

  1. Be able to model the target system
  2. Understand, explain, and imagine what could possibly go wrong
  3. Decide what to do about it - fix those things in the context of what they’re building
  4. Recognise how successful the effort was

Tips for the execution part

  1. Understand what you are threat modelling - you won’t have to scope everything
  2. Figure out how much of the scope should be done - what % of the total. How much of the attack surface is covered
  3. Get the responsible parties to threat model
  4. Per your methodology validate the results
  5. Get everything in one place - otherwise it’s not clear what to do with the results
  6. Get people to act on the results

Scale

He says this one is going to hurt! You’ll notice the differences between teams show, and one approach may not work for all teams

Mitigate

Work on medium and long-term road maps

Importance of templates

Closing

That’s it for my time at OWASP Global AppSec Dublin, which overall had some really good talks, and I encourage people to check them out when they get posted. If you went and didn’t feel motivated to try your hand at threat modelling, I don’t think we were at the same conference. I’d also encourage people to go and sign https://github.com/owasp-change/owasp-change.github.io as I feel this is an important change that needs to happen to keep OWASP relevant (much more than a simple and questionable rebrand).