Your argument is invalid! 'Cause I'll give you a hard time if you insist on it.

Today I read a blog post by Fefe in which he rants about how folks just give up on trying to develop more secure code, or even on fixing all the bugs in their software, and instead divert resources from bug-fixing teams toward building mitigations like sandboxing technologies.

Fefe criticizes Adobe's security chief Brad Arkin for the following statement:

“My goal isn’t to find and fix every security bug, I’d like to drive up the cost of writing exploits. But when researchers go public with techniques and tools to defeat mitigations, they lower that cost.”

I think Fefe is absolutely right to criticize this, but it nevertheless got me thinking.

It reminded me of an episode of the German "Chaosradio" podcast whose title translates to "Responsible Disclosure - How to be a hacker and survive".


We in the infosec community often think along the lines of "Wow, I've found this security bug, you totally have to fix it".

But there are certainly powerful organizations you'd better not mess with: big pharmaceutical or biotech companies, intelligence services, federal police, organized crime, and others.

Their security often relies much less on "fixing bugs" than on the fact that they can simply crush you like a "bug" (haha, the irony) and basically destroy your life, either by suing the heck out of you - or worse.
I'll leave the latter to your imagination.


So the cost of exploitation could be really high.

At the very least it deters most everyday folks from doing stupid things, and when it doesn't, the offenders will have a very hard time, and word will spread that you'd better not imitate them.

Let's face the facts:

The physical world out there is full of dangers and "bugs". Some can be fixed; others can't be fixed easily, or even mitigated at all. And some are just so expensive to fix or mitigate that you decide to live with them.

So how do we deal with that? With rules and laws. If you take a closer look, most of this stuff is regulated by social rules and laws; the rest is mitigated technically somehow, but never really fixed.

Just think about it for a second before you unleash a shitstorm on me.

You can build extremely robust cars that protect you in case of a crash.
But unless F1 monocoques get really cheap, most cars won't have one.
And even if they did, it would still be reasonable to have speed limits.
This doesn't mean you can't speed and die in a crash.


In most countries, you are not allowed to own a gun without a special permit.
This doesn't mean you can't get hold of a gun and shoot people.

I've read in a tweet that it is illegal in Japan to carry lockpicks.
This doesn't mean you can't carry them anyway and pick some locks if you know how.


Many professions are also heavily regulated. 
You have to go through a formal education process and get a license to pursue certain careers, like working as a private investigator or bounty hunter (or so I hope, at least).


Bruce Schneier recently said on the Pauldotcom podcast that laws are very important, if not *the* most important thing, for security, because you just cannot prevent or mitigate all security risks technically.

Traditionally, dangerous knowledge was often kept secret and only passed on within a very close circle of people.

With the internet, those days are certainly over.

But does that necessarily mean that Arkin's argument is totally invalid? 

I would like to say yes, because I want to live in a world where security bugs are fixed rather than merely mitigated.

I would like to say yes, because bugs in software actually *can* be fixed, and because I still have hope that we *can* develop methods and technologies that enable us to produce more secure code. In that respect, information security is not the same as what we traditionally know from the physical world.

I want to live in a world where security researchers can present their findings at conferences, so that we can all learn about security threats and bugs and build our own mitigations.

I don't want to live in a world that rolls back what we've achieved in the last few decades - the fact that everybody can share their knowledge and learn new things via the internet.

I don't want to live in a world where researchers are threatened into not disclosing their findings, or where you can only work in information security if you hold a federal license and have signed a code of conduct that legally binds you not to disclose certain types of information and not to share your knowledge with anyone who hasn't signed the same code.


On the other hand, information technology nowadays impacts the physical world, and so security bugs also impact the security of real people in the real world. The concept of "security by social rules, laws and intimidation" could therefore apply here as well.
The only difference is that those rules and laws would have to apply globally, not just within a single country, in order to work.

So I am wondering if my above wishes are still realistic. What do you think?
