Threat Models

From DevSummit

This process emerged from security practices training and support work with clients.

We want to focus on internal capacity building — this is often lacking when hard-core security people get together and create a threat model: they may come up with a good model that doesn’t boost organizational capacity. We want to empower people to think about their own security.

We also want to build clarity for people. Example: “We’re scared of the NSA.” This may not be a legitimate fear — it provides an opportunity to re-focus on the threats they should actually be concerned about, and to do things in a way that feels safe. This piloting has happened in the reproductive justice movement, which focuses on the safety of women’s bodies in its work. For them, safety of bodies was already understood — so how do we generalize this to groups where body safety is not part of current practice?

This usually takes a half day to dive deep and have people think things through, so this is a demo version. We are not going to generate a useful threat model — don’t focus on the content; refine the process.

Idea: we might be able to use the threat models already identified by a prior session

Our demonstration is a lightweight, flexible approach. Our goal is not a conventional full threat model — it is more about advancing understanding.

Tactical Tech’s holistic security guide uses the image of a perception window (things we don’t see that are real; things we see that are not real — the goal is to open the window). This is a process, not an end point. It is human-centred, so there are no logical or perfect workflows, and it is not a rigid process. It is not a great process for securing your infrastructure — something more formal would be better there — but different groups or families might appreciate this. We first coupled it with a workshop introducing the military-industrial language and terms. We likely need to think about how to better phrase and voice these things — there is a challenge in inheriting that mindset (information, assets, adversaries, threats, etc.).

Assets: anything you are trying to protect from bad outcomes (information assets, comms channels where data is moved from one place to another, people’s bodies, mental health, positioning/comfort/reputation in society, [social capital])

Imaginary threat model today is NPDev :)

Harvesting

Open brainstorm to seed the conversation — post-its and pens. Three rounds on the information systems, repositories, or channels that you use.

  • Twitter feed (hashtag feed)
  • wiki traffic
  • phone contacts and call contents
  • login for wiki
  • attendance lists/details
  • conversation contents
  • participant list
  • bodies
  • wifi bandwidth, etc.

No refinement in first pass

Second pass: start grouping — affinity diagramming, reduce into smaller chunks

  • Alternative: distribute questions beforehand and have folks come with post-its so that you can focus on groupings and gaps.

Seed question: “What is the information or assets?”
Q - Are you also thinking about job or financial security?
A - Not sure where that fits in — those are things to preserve more than protect. We protect them by securing tangible assets, not conceptual ones; they are more the outcomes we want to protect or mitigate against. When you collect these from folks you’ll see the same thing twice, phrased differently — duplicates.
Q - What about the buildings?
A - We focus on things we can mitigate against; the building is outside our control.

During brainstorming, walk around and re-direct where needed. Bring folks out of rabbit holes toward things that are concrete and meaningfully in their control. You may still post those out-of-scope items so that you can discuss why they would not be included.

Potential Adversaries/Threat Actors

  • Who are we actually talking about?
  • Who might be interested in these assets?
  • Who do we trust with them?
  • Get into the actionable perception window

Get them to dump everything — then the whittling down is a teachable moment

Brainstorm examples: an attendee, the public (we are in a public space — everyone not part of NPDev), an infiltrator, Aspiration haters, a curious person who records a talk and is not part of our shared agreement, a photographer who does not understand what red lanyards mean, a targeted thief, an opportunistic thief, a random lost person, Preservation Park network operations and security (who view themselves as a police force), the Oakland Police Department, people interested in our assets, people who want to access our assets, litigators.

Advanced persistent threats go into their own territory: FBI, NSA, Google. Be clear that this is not a threat model we’re going to mitigate in this type of process — if we are trying to defend against these, this is not the process that will get us there. These are known threats that we may not be able to meaningfully defend against. Keep them in the room with us, but this is super serious stuff: the level you need to mitigate against this is a different kind of activism and movement building. [I may have some of this wrong…]

Then in next pass we organize these stickies.

Threats

  • Outcomes that we want to avoid
  • What is going to happen if a person does a bad thing?
  • Social graph / social network revealed
  • Loss of trust in systems
  • Physical harassment or attacks
  • Conversations used to shame or attack
  • Digital harassment

All of this at some point needs to land in the lap of a technologist, or of someone who knows how to approach it.

Three passes of brainstorming on three different topics, then affinity diagramming — look to refine the groups and collect like with like (this is powerful because a lot of learning happens at this step). Narrow the sets down to get to something that we can get our heads around.

Start by grouping within the three different categories. Get folks to grind on this in small groups. Facilitators walk around and help out. Get down to a smaller set of items that we can get our mind around [and do something about].

You may see that protecting against one protects against another. E.g., public + thieves + random lost person can all be solved together. Infiltrator and attendee likely look the same. Preservation Park management, network operations, and PP security likely go together. Volunteers look just like employees in terms of access — group whoever profiles the same. Here is where you mine a group’s wisdom. A litigator looks like Google because both can reach into our systems (be prepared for discovery — a risk associated with cloud services).
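The merging described above can be sketched as a tiny data structure exercise — many post-its reduced to a handful of clusters. This is purely illustrative; the cluster labels are my assumptions, loosely drawn from the session’s own examples of actors that profile alike:

```python
# Hypothetical affinity clusters of threat-actor post-its.
# Labels and groupings are illustrative assumptions, not the session's output.
clusters = {
    "outsiders with physical access": [
        "the public", "random lost person", "opportunistic thief",
    ],
    "insiders": ["attendee", "infiltrator", "volunteers", "employees"],
    "venue authorities": [
        "Preservation Park management", "network operations", "PP security",
    ],
    "legal/data reach": ["litigators", "cloud providers"],
}

# The point of the pass: a smaller set of items we can get our minds around.
total = sum(len(items) for items in clusters.values())
print(f"{total} post-its reduced to {len(clusters)} clusters")
```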

Take a good amount of time on this. Report back and get peer feedback.

Pass 2 is affinity diagramming.

The last pass, which is the most important, is to come together and ask everyone to take three groups — one from each — and choose a threat scenario that is plausible or that they’re concerned about. It is an invitation to collect a triad:

  • OUTCOME you are trying to avoid
  • ACTOR who may be able to do that
  • THREAT that you want to mitigate against

New board for threat scenarios.

e.g., Public > conversation contents > physical harassment and attacks
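A triad like the one above could be modeled as a simple record, if you wanted to capture the scenario board digitally. A minimal sketch, assuming a class and field names that are not part of the original process:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ThreatScenario:
    """One triad collected on the threat-scenario board (names assumed)."""
    actor: str    # who may be able to do it
    asset: str    # what they would go after
    outcome: str  # the outcome we want to avoid

    def label(self) -> str:
        # Matches the "actor > asset > outcome" shorthand from the post-its
        return f"{self.actor} > {self.asset} > {self.outcome}"


scenario = ThreatScenario(
    "Public", "conversation contents", "physical harassment and attacks"
)
print(scenario.label())
```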

Mining specific scenarios from the group.

Then dot vote on the most important scenarios to surface which are most realistic and plausible.

Everyone does this individually — you may need to duplicate post-its. It is okay to have some similar ones; discuss the similarities and differences to tease out important nuance and talk through concerns. This is a conversational, discursive process intended to build the group’s knowledge about security.
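Tallying the dot votes is just counting, which a facilitator would normally do by eye; a sketch of the same bookkeeping, with made-up scenarios and vote counts:

```python
from collections import Counter

# Hypothetical dot votes: one entry per dot a participant placed.
votes = [
    "Public > conversation contents > physical harassment",
    "Litigator > wiki traffic > social graph revealed",
    "Public > conversation contents > physical harassment",
    "Opportunistic thief > bodies > laptop theft",
    "Public > conversation contents > physical harassment",
]

tally = Counter(votes)
# Surface the scenarios the group finds most realistic and plausible.
for scenario, dots in tally.most_common():
    print(f"{dots} dot(s): {scenario}")
```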

Ask people to explain why they have selected this threat scenario — good learning moment. Sharing lived security experience with peers.

Then we discuss how to mitigate those. We [experts] take that back and reality check it. So we may tell them that their biggest concern really isn’t. Focus on laptop theft before massive opsec threats.

We’ve developed best practices and go through those, mapping them against the key threats that emerged, ideally in checklist form.

If you use this, please let us know how it went and what you learned. We are almost ready to put this in script form.