Here is the final part of the transcript from Adam Shostack and Mark Vinkovits' session at the AppSec California 2019 conference. This part of the session explains Mark's thought process behind the creation of the Privacy suit in the Elevation of Privilege card game: why he was motivated to create it and how he went about doing it.
If you've missed the first two transcripts of Adam and Mark's session, you can find them here:
[MARK] So where did this whole idea come from? Like most other companies, we were doing our GDPR prep work, which I assume everybody here has already done and closed out. Towards the end of the deadline we had a conversation about a specific article in GDPR about data protection by design and by default, and no one had any clue what that meant - we still don't really know. But it's an article in GDPR, so you have to have some kind of documentation and evidence that you're doing it. So both legal and compliance created a policy which required all the developers to think about the privacy of what they are developing.
Now the reality is that design decisions are usually made either by one developer while he's working on a story or, in the best case, by asking a couple of colleagues to grab a coffee and discuss the design decision he wants to make. That's not a context where people start looking at specific items in a policy.
[ADAM] Underlying this, I believe pretty deeply that all of the developers I've ever worked with want to ship quality code, and that quality now includes security and privacy. They understand that; they see the headlines. They're like, yes, this matters, but I'm not sure exactly what you want me to do. Please help me understand how I can do this in my sprints, in my process, and in a way that respects my time and my engagement.
And so for me there's huge value in making sure that the security and privacy requests that we make of people are accessible, understandable and to the extent that we can make them fun, it's an all-around win.
[MARK] As Adam said, this whole privacy concept somehow had to get into the context where the developers are. What I was thinking was that we already had this quite well-established process around threat modeling. We already had an area where developers knew that okay, we're going to talk about threats, we are going to go through this with every feature, so it wasn't unexpected for them. That's where the idea came from: how could we put privacy into the threat modeling exercise? How could we leverage the same situation to give the engineers insight into that topic? And here is where it's relevant that I used to work a lot in user-centered design.
I learned there the hard way that whatever system or project you're working on, it's going to be very good quality if you cooperate with a couple of people, and it's going to be awesome quality if you involve the end users of your solution very early. There are a lot of methodologies and a lot of names for this - user-centered design, human-computer interaction, user experience, iterative design, etc. I think the most important part of all of them is: involve your end users, and involve them often and regularly.
So the four items that I think are important are:
The picture here is not necessarily saying that everyone has to go to that extreme, but it's a good example. We had a project where we wanted to equip firefighters and help them in extreme situations. We had a bunch of very good ideas. The mobile development team had ideas for apps the firefighters would use to look through snow or smoke and see their environment. But because working with end users was quite established in the team, they said: first of all, we are going to go to a firefighting exercise and see how that works.
The guys you see in the picture are not firefighters. They are IT researchers who went to a firefighting exercise, put on all the equipment, and actually followed the firefighters through their process to understand what this environment looked like. Quite quickly they realized that all of the ideas they had could just be thrown out - you're not going to put any additional equipment on a firefighter. He's already carrying around 20 kilos (Adam - 45 pounds).
He has gloves on, he can't see anything. So what kind of IT equipment are you going to give him? There was an interesting result of this whole exercise, and I'm curious whether you come to the same conclusion.
So what do you think the firefighters themselves said was the most important tool when they are going out to an emergency situation? [Audience responds]
Nope, not the radio. An oxygen mask could be a possibility, but no. They actually said a wedge. They usually carry a couple of wooden wedges because it's an all-around tool - they can break things open, use it as a hammer, put it under things they want to hold in place. So although it seems quite counterintuitive, given that a large wooden wedge has quite some weight, they carry them around when they go into these situations.
So motivated by all this, I wanted to integrate privacy into threat modeling: how do I understand my users, how do I understand the environment they're going to work in?
Fortunately, as I said, we were towards the end of our GDPR preparation, so we had a bunch of JIRA items which specifically called out engineering tasks that needed to be done for GDPR compliance. We had the engineers in the threat modeling sessions, so those JIRA items were something they could understand. Additionally, we had around 15 actively developed products, so it wasn't too narrow a set of requirements I was able to look at but a huge portfolio - depending on where a specific product started, in which culture, and what target audience it initially had, there were a bunch of interesting items in there: items like removing inactive profiles, anonymizing IP addresses, or implementing a cleanup job which removes personal background images. Based on these items I was able to compile a list of topics which should be discussed during a privacy threat modeling session.
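As a concrete illustration of one of those engineering tasks, IP anonymization is often implemented by truncating the address so it no longer identifies a single host. This is a minimal sketch, not the team's actual implementation; the specific prefix lengths (/24 for IPv4, /48 for IPv6) are an assumption:

```python
import ipaddress

def anonymize_ip(ip: str) -> str:
    """Truncate an IP address so it no longer identifies a single host.

    IPv4: zero the last octet (keep only the /24 prefix).
    IPv6: keep only the first 48 bits.
    """
    addr = ipaddress.ip_address(ip)
    prefix = 24 if addr.version == 4 else 48
    # strict=False lets us pass a host address rather than a network address
    net = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
    return str(net.network_address)

print(anonymize_ip("203.0.113.42"))      # 203.0.113.0
print(anonymize_ip("2001:db8::abcd:1"))  # 2001:db8::
```

Note that truncation is pseudonymization rather than full anonymization; how much of the address must be removed is a judgment call your data protection officer would weigh in on.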
The next question was: okay, I'm going to move privacy into the threat modeling session, but what is that going to look like? Do we play the Elevation of Privilege card game and then switch to something else? Do we look at the diagram and start pointing things out, or go through a checklist, etc.?
It made sense to integrate privacy into the card game itself. So what does that look like? First of all we needed some cute icons, like the ones with the monsters. That's a fun experience, but it's not the important part.
The hint texts on the cards work very well for developers: they're not so detailed that they turn into a checklist, boring developers into thinking they're just ticking boxes, and they're not so abstract or security-specific that developers can't relate to them. So the idea was that, in order to create hint texts that are clear for developers, I created small scenarios for all the topics I had identified.
For example, one of the scenarios was that someone uses a restaurant recommendation app which, based on previous restaurant visits, compares them with the restaurant personas of other users and identifies further restaurants worth visiting.
The privacy concern would be that, in addition to the restaurant persona, a financial persona is also being created, which is then used to target advertising for jewelry or something like that. The question was: how would an engineer describe this scenario, and what text would he find understandable that relates to this kind of situation?
[ADAM] The thing that I wanted to mention is we actually spent a huge amount of time working through and doing usability tests on the hints on the cards. We prototyped half a dozen different approaches and then watched how people engaged with them and so it was exciting to me to hear that you [Mark] actually did something similar as you were doing this work.
I want to go back to something you said. When you were talking about doing this work of really observing, you said that you don't have to go to this extreme, and there I want to disagree a little bit. The genesis of this work was really engagement with the human beings we were asking to do threat modeling - really watching as they struggled with the tool and said, this is tedious, I'm not finding the right things, how do I make it better - and sitting and saying, okay, this is what really happens. I think there are few substitutes. I've never put on firefighter gear as part of this, but I go into different meeting rooms and say, I just want to listen and see what's happening - please let me be a wallflower - or I sit behind a glass wall. It never fails to surprise me how much I learn when I simply sit, observe, and take notes about what's happening.
[MARK] So finally, we are going to add privacy to the Elevation of Privilege card game, but as I said, I had compiled quite a large list. Are we going to work with all of these? How is that going to fit into the time? As mentioned, most threat modeling sessions end because you run out of time, and you run out of time because, although it's a known and accepted process, you still have to convince a product manager: I'm going to pick up your whole development team, take them into a room, and have them do threat modeling instead of delivering your feature. So I had to find a balance where we could extend the session without getting into an argument of "we understand security is important, but could we avoid the privacy part?"
The magic number I came up with was that we could add about 20% to the session and still argue that's acceptable. But then how were we going to get that huge list down to fit into just that slice of time? The answer was: let's have one suit dedicated to privacy, which means 11 items we can have in there. So which 11 items?
I did two rounds of filtering. The first round was to get down to the 11, using all the people working with GDPR within the company: each of them provided a ranking of what they thought was important. Then I sent those 11 items to members of the International Association of Privacy Professionals and had them do the same ranking exercise. So I got a final ordered list of the most important items - for example, which should be the King, which should be the 2, etc.
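Combining many individual rankings into one consensus order can be done with a simple positional (Borda-style) count. Mark doesn't specify the exact aggregation method he used, so this is only a plausible sketch; the example topic names are made up:

```python
from collections import defaultdict

def aggregate_rankings(rankings):
    """Merge several individual rankings into one consensus order.

    Each ranking lists items from most to least important; an item
    in position i of an n-item list earns n - i points (Borda count).
    """
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for i, item in enumerate(ranking):
            scores[item] += n - i
    # Highest total score first
    return sorted(scores, key=scores.get, reverse=True)

# Three hypothetical reviewers rank the same privacy topics:
votes = [
    ["data deletion", "consent", "retention"],
    ["consent", "data deletion", "retention"],
    ["data deletion", "retention", "consent"],
]
print(aggregate_rankings(votes))  # ['data deletion', 'consent', 'retention']
```

A positional count like this dampens the inconsistency of individual rankers that Mark mentions next: one person's odd ordering is averaged out by everyone else's.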
Fortunately, at that time I was reading a book which talked a lot about how humans are really bad at ranking things. You give them a list of items in the morning and in the afternoon you'll get different results. Sometimes you even have duplicates in the list, and people don't notice they have put one item at the top and the other at the bottom.
So what I did to rationalize this a little was to look at the actual fines given by different data protection authorities. Fortunately, the French and English are quite trigger-happy, so there was plenty of reference material to work from, and there were some very interesting findings. For example, privacy professionals assumed that when you have a sub-processor you send data to, you ask them at the end of the contract to delete it and that's how it works. But there have been fines for situations where someone actually gave data to a third party, asked them later to delete it, and that data just stayed there. This was one example where I had to reorder the cards.
[ADAM] To explain how Mark actually added privacy to the game, it's important to understand a little bit about STRIDE. STRIDE is a mnemonic - actually, this year will be the 20th anniversary of the creation of STRIDE as a way to help people think about what goes wrong. The mnemonic stands for Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege (applause). Thank you.
Usually I say it more slowly, but these are six types of things that typically go wrong with systems, so if you're just thinking about what might go wrong, it's a helpful starting point. We use them as the suits in the card game, and Mark extended that to STRIPED - so hopefully in 20 years we'll still be talking about this. Let's be ambitious!
[MARK] I think it's also very important to mention that since we already had STRIDE, we had to make sure the items in the privacy extension were not just manifestations of those categories. Looking specifically at the fines we had examined, I tried to translate them into STRIDE threats, but some - very specifically, your third party not deleting your data - don't really translate to STRIDE. So I think the privacy part has value, and I really encourage everyone to think about this when they are threat modeling.
To help you with that - this is where the game part comes in - we are going to present three of these cards (if you want to know more about the individual cards, you can look at the downloads). It's also Creative Commons licensed.
Adam and Mark play a short game with the audience using the following cards from the Privacy suit.
[ADAM] So after Mark reached out and contacted me and we had submitted this talk, we heard from some folks at F-Secure who had done similar work, so we wanted to give a shout-out to them and make you aware of what they had done. They've also created a variant, under a Creative Commons-ish license.
To quote from their site, you can play with or without the original Elevation of Privilege - so a little bit of a different design - and they created a four-letter mnemonic called TRIM, which is:
[MARK] To sum up, you have basically three options. If you haven't tried the Elevation of Privilege card game before, or if you are just looking at security threats, you can get the original deck. If you want to add a little bit of privacy spice to your threat modeling, go with the privacy extension, STRIPED. And if you want to go all in on privacy for a session, you can use the Elevation of Privacy game.
[ADAM] So we want to encourage you to go out and try this: try threat modeling, try threat modeling with privacy. There are free versions that you can print yourself - from GitHub, LogMeIn, or F-Secure - or there's a company in the UK, Agile Stationery, who are making them available for sale, so you can get a deck that's printed up nicely, with rounded corners, in a box, and it looks a bit nicer. Someone said to me the other day, you know, I saw that and it looks a little expensive - I think it was 20 pounds. In a sense it is a bit more expensive than a standard deck of cards, which they print by the millions, but if you're in a room full of developers, this is by far the cheapest thing in the room. So you can get a copy there. Thank you, Mark, for creating and driving this. We're happy to take any questions you have.
Watch the video for this transcript:
We are pleased to offer Mark's game for sale here