This webinar remains one of our favourite explanations of the gamification of cybersecurity in Agile development contexts.
Grant Ongers was joined by a panel of cybersecurity experts (Adam Shostack, Stuart Gunter, Tash Norris and Nina Alli) to discuss the gamification of cybersecurity in Agile development contexts at an event organised by Equal Experts.
Grant explained that the traditional waterfall approach provides a long-duration window at the beginning of a project to design security into systems, with remediation of unexpected flaws often delayed until the system is in production. This long window allows time for meticulous pre-planning and design review by specialists. In contrast, agile methods provide multiple short windows for design input, along with continuous feedback and measurement opportunities.
Grant explained that a traditional review of a system design would not be appropriate in an agile context, as design documents are likely to be incomplete and the process might take too long. Cybersecurity professionals who don't adapt their methods are doomed to become a deployment bottleneck, causing either tension and conflict or the omission of security work altogether.
Gamification was proposed as a way to scale and realign the threat modeling function by allowing developers with up-to-date system knowledge to get involved in frequent threat modeling sessions.
Grant used a portion of the video to explain the game mechanics before moving on to the value of the game:
The OWASP Cornucopia game, a variation of Elevation of Privilege, was proposed as a gaming deck that is written in language accessible to developers. Adam Shostack, the inventor of EoP, was also on hand to point out that even smaller threat modeling exercises are possible based on a "4 question" strategy, without using a game.
Developers should expect to perform threat modeling during the "grooming" activity: before a sprint in a Scrum context, or before stories are pulled into implementation in a Kanban context.
Resources
We were grateful that Grant's talk mentioned our deck and the Croupier tool for dealing random hands. We (obviously) agree that retaining a random element in the card dealing is a really important part of the gameplay. That random element is regrettably absent from other models of remote Elevation of Privilege facilitation.
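To make the point concrete, here is a minimal sketch, in Python, of dealing random hands from a Cornucopia-style deck. This is not the Croupier tool's actual implementation; the suit names are Cornucopia's, but the card values and dealing logic are simplified for illustration (jokers omitted).

```python
import random

# OWASP Cornucopia suits; values follow a standard playing-card layout.
SUITS = [
    "Data Validation & Encoding",
    "Authentication",
    "Session Management",
    "Authorization",
    "Cryptography",
    "Cornucopia",
]
VALUES = [str(n) for n in range(2, 11)] + ["J", "Q", "K", "A"]


def deal(players, hand_size, seed=None):
    """Shuffle the full deck and deal one random hand of (suit, value) pairs per player."""
    rng = random.Random(seed)
    deck = [(suit, value) for suit in SUITS for value in VALUES]
    rng.shuffle(deck)  # the random element we consider essential to the gameplay
    return [deck[i * hand_size:(i + 1) * hand_size] for i in range(players)]


if __name__ == "__main__":
    for i, hand in enumerate(deal(players=4, hand_size=6), start=1):
        print(f"Player {i}: {hand}")
```

Because nobody chooses which threat prompts land in their hand, every round nudges the table towards cards, and conversations, they would not have picked for themselves.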
Q&A Highlights
We challenged the panel with a few questions:
How much time is spent on the "convincing" phase of a game round?
This is the phase of play where a threat is discussed and shown to be viable in the system under review. The panel explained that much of the value of the game lay in starting focused conversations about technical problems that would not otherwise occur. If developers cannot agree on whether a system is vulnerable, or whether a threat has been mitigated, then getting to the bottom of that is useful work. If work is needed to rediscover or to measure the current state of the system, that work can be planned in before knowledge gaps become a problem. This keeps the team up to date, makes effort more visible and helps identify threats.
Some players like to cheat. Do you encourage cheating?
This question arose from an offline conversation with Grant Ongers. Grant passionately favours allowing teams to cheat in the gaming aspects of the exercise. Our decks are less suited to cheating, as you can spot the suits in another player's hand. Two cheating options were discussed:
- Failing to follow suit means developers can avoid looking at a less serious threat if, in the process, they lose a round (the follow-suit rule itself is sketched after this list). In this scenario the player keeps a high-value card for another round because a more serious threat has already been discussed, e.g. they retain a queen when a king has already been played. Personally, I don't like the idea of developers avoiding a prompt to think about a threat, even if it makes the game more engaging and habit-forming, but perhaps I am too serious.
- Going off topic. This scenario involves playing a high-value card that is likely to win a round, but then attempting to win a second point by describing a threat that matches the system. The dishonest element is that the threat described does not match the card played. Since this incentivises discussion of a threat, we agree it is a useful bit of gameplay, but it does once again allow another threat to be overlooked.
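For readers unfamiliar with trick-taking mechanics, here is a minimal, hypothetical sketch of the follow-suit rule that the first kind of cheating bends. The function name and card representation are ours, not the game's official rules text.

```python
def legal_plays(hand, led_suit):
    """Return the cards a player may legally play in a trick-taking round.

    Standard rule: if the player holds any card of the suit that was led,
    they must follow suit; otherwise any card in the hand is legal.
    Cheating option one above means ignoring this constraint, so a high
    card (and its more serious threat) can be held back for a later round.
    """
    following = [card for card in hand if card[0] == led_suit]
    return following if following else list(hand)


# Example: Authentication was led, so the Authentication queen must be played,
# even if the player would rather save it for a round they can win outright.
hand = [("Authentication", "Q"), ("Cryptography", "3"), ("Authorization", "7")]
print(legal_plays(hand, led_suit="Authentication"))  # [('Authentication', 'Q')]
```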
Is the game sufficiently comprehensive for medical device security? Is there anything you would add?
Nina Alli, the medical device expert on the panel, suggested that there could be a suit of threats around human factors, including factors arising from or involving patients themselves. Adam Shostack joked that Nina would be dragged into producing a medical devices version of the game, which everyone quite liked the sound of! Watch this space! Nina also remarked that in a hospital there is no safe sandbox environment in which threat models can be experimented with; the hospital itself, with the humans inside it, is effectively the sandbox for certain kinds of threat.

A pseudonymous contributor asked:
Sometimes the ratio of Application Security professionals to developers is so imbalanced that perhaps automation is still useful?
The panel seemed quite sceptical about the role of automation, preferring scrutiny by creative human minds. Grant suggested, as we have done in the past, that if the ratio is that lopsided then gamification in fact offers a lightweight and compelling means to draw additional minds into the threat modeling effort and compensate for the staffing imbalance.
Overall this was an excellent session from Grant Ongers and the panel, as well as a slick presentation from Equal Experts.
Full event video: