
A new Mindcraft moment?
Posted Nov 6, 2015 20:50 UTC (Fri) by PaXTeam (guest, #24616) [Link]

1. this WP article was the 5th in a series of articles following the security of the internet from its beginnings to related topics of today. discussing the security of linux (or lack thereof) fits nicely in there. it was also a well-researched article with over two months of research and interviews, something you can't quite claim for yourself on your recent pieces on the topic. you don't like the facts? then say so. or even better, do something constructive about them like Kees and others have been trying. but silly comparisons to ancient crap like the Mindcraft studies and fueling conspiracies don't exactly help your case.
2. "We do a reasonable job of finding and fixing bugs."
let's start here. is this statement based on wishful thinking or cold hard facts you're going to share in your response? according to Kees, the lifetime of security bugs is measured in years. that's more than the lifetime of many devices people buy, use and ditch in that period.
3. "Problems, whether they are security-related or not, are patched quickly,"
some are, some aren't: let's not forget the recent NMI fixes that took over 2 months to trickle down to stable kernels, and we also have a user who has been waiting for over 2 weeks now: http://thread.gmane.org/gmane.comp.file-systems.btrfs/49500 (FYI, the overflow plugin is the first one Kees is trying to upstream, imagine the shitstorm if bugreports will be treated with this attitude, let's hope the btrfs guys are an exception, not the rule). anyway, two examples are not statistics, so once again, do you have numbers or is it all wishful thinking? (it's partly a trick question because you'll also have to explain how something gets determined to be security related, which as we all know is a messy business in the linux world)
4. "and the stable-update mechanism makes those patches available to kernel users."
except when it doesn't. and yes, i have numbers: grsec carries 200+ backported patches in our 3.14 stable tree.
5. "In particular, the few developers who are working in this area have never made a serious attempt to get that work integrated upstream."
you don't have to be shy about naming us, after all you did so elsewhere already. and we also explained the reasons why we haven't pursued upstreaming our code: https://lwn.net/Articles/538600/ . since i don't expect you and your readers to read any of it, here's the tl;dr: if you want us to spend thousands of hours of our time to upstream our code, you will have to pay for it. no ifs, no buts, that's how the world works, that's how >90% of linux code gets in too. i personally find it quite hypocritical that well-paid kernel developers are bitching about our unwillingness and inability to serve them our code on a silver platter for free. and before somebody brings up the CII, go check their mail archives: after some initial exploratory discussions i explicitly asked them about supporting this long drawn out upstreaming work and got no answers.

Posted Nov 6, 2015 21:39 UTC (Fri) by patrick_g (subscriber, #44470) [Link]

Money (aha) quote:
> I suggest you spend none of your free time on this. Zero. I suggest you get paid to do this. And well.
Nobody expects you to serve your code on a silver platter for free. The Linux Foundation and big companies using Linux (Google, Red Hat, Oracle, Samsung, etc.) should pay security experts like you to upstream your patches.

Posted Nov 6, 2015 21:57 UTC (Fri) by nirbheek (subscriber, #54111) [Link]

I would just like to point out that the way you phrased this makes your comment a tone argument[1][2]; you've (probably unintentionally) dismissed all of the parent's arguments by pointing at their presentation. The tone of PaXTeam's comment reflects the frustration built up over time with the way things work, which I feel should be taken at face value, empathized with, and understood rather than simply dismissed.
1. http://rationalwiki.org/wiki/Tone_argument
2. http://geekfeminism.wikia.com/wiki/Tone_argument
Cheers,

Posted Nov 7, 2015 0:55 UTC (Sat) by josh (subscriber, #17465) [Link]

Posted Nov 7, 2015 1:21 UTC (Sat) by PaXTeam (guest, #24616) [Link]

why, is upstream known for its basic civility and decency? have you even read the WP post under discussion, never mind past lkml traffic?

Posted Nov 7, 2015 5:37 UTC (Sat) by josh (subscriber, #17465) [Link]

Posted Nov 7, 2015 5:34 UTC (Sat) by gmatht (guest, #58961) [Link]

No Argument

Posted Nov 7, 2015 6:09 UTC (Sat) by josh (subscriber, #17465) [Link]

Please don't; it doesn't belong there either, and it especially doesn't need the kind of cheering section that the tech press (LWN generally excepted) tends to provide.

Posted Nov 8, 2015 8:36 UTC (Sun) by gmatht (guest, #58961) [Link]

OK, but I was thinking of Linus Torvalds

Posted Nov 8, 2015 16:11 UTC (Sun) by pbonzini (subscriber, #60935) [Link]

Posted Nov 6, 2015 22:43 UTC (Fri) by PaXTeam (guest, #24616) [Link]

Posted Nov 6, 2015 23:00 UTC (Fri) by pr1268 (subscriber, #24648) [Link]

Why do you assume only money will fix this problem? Yes, I agree more resources should be spent on fixing Linux kernel security issues, but don't assume someone giving an organization (ahem, PAXTeam) money is the only solution. (Not meant to impugn PAXTeam's security efforts.)

The Linux development community may have had the wool pulled over its collective eyes with respect to security issues (whether real or perceived), but merely throwing money at the problem won't fix this.

And yes, I do realize the commercial Linux distros do lots (most?) of the development work, and that implies indirect monetary transactions, but it's much more involved than just that.

Posted Nov 7, 2015 0:36 UTC (Sat) by PaXTeam (guest, #24616) [Link]

Posted Nov 7, 2015 7:34 UTC (Sat) by nix (subscriber, #2304) [Link]

Posted Nov 7, 2015 9:49 UTC (Sat) by PaXTeam (guest, #24616) [Link]

Posted Nov 6, 2015 23:13 UTC (Fri) by dowdle (subscriber, #659) [Link]

I think you undoubtedly agree with the gist of Jon's argument... not enough focus has been given to security in the Linux kernel... the article gets that part right... money hasn't been going towards security... and now it needs to. Aren't you glad?

Posted Nov 7, 2015 1:37 UTC (Sat) by PaXTeam (guest, #24616) [Link]

they talked to spender, not me personally, but yes, this side of the coin is well represented by us and others who were interviewed. the same way Linus is a good representative of, well, his own pet project called linux.
> And if Jon had only talked to you, his would have been too.
given that i'm the author of PaX (part of grsec), yes, talking to me about grsec issues makes it one of the best ways to research it. but if you know of someone else, be my guest and name them, i'm quite sure the recently formed kernel self-protection folks would be dying to engage them (or not, i don't think there's a sucker out there with thousands of hours of free time on their hands).
> [...]it also contained quite a few groan-worthy statements.
nothing is perfect but considering the audience of the WP, this is one of the better journalistic pieces on the subject, regardless of how much you and others dislike the sorry state of linux security exposed in there. if you'd like to discuss more technical details, nothing stops you from talking to us ;).
speaking of your complaints about journalistic qualities, since a previous LWN article saw it fit to include several typical dismissive claims by Linus about the quality of unspecified grsec features with no evidence of what experience he had with the code and how recent it was, how come we didn't see you or anyone else complaining about the quality of that article?
> Aren't you glad?
no, or not yet anyway. i've heard a lot of empty words over the years and nothing ever manifested, or worse, all the money has gone to the pointless exercise of fixing individual bugs and the related circus (that Linus rightfully despises FWIW).

Posted Nov 7, 2015 0:18 UTC (Sat) by bojan (subscriber, #14302) [Link]

Posted Nov 8, 2015 13:06 UTC (Sun) by k3ninho (subscriber, #50375) [Link]

Right now we've got developers from big names saying that doing all that the Linux ecosystem does *safely* is an itch that they have. Sadly, the surrounding cultural attitude of developers is to hit functional goals, and often performance goals. Security goals are often missed. Ideally, the culture would shift so that we make it hard to follow insecure habits, patterns or paradigms -- that's a task that will take a sustained effort, not merely the upstreaming of patches.
Whatever the culture, these patches will go upstream eventually anyway because the ideas that they embody are now timely. I can see a way to make it happen: Linus will accept them when a big end-user (say, Intel, Google, Facebook or Amazon) delivers stuff with notes like 'here's a set of improvements, we're already using them to solve this kind of problem, here's how everything will keep working because $evidence, note carefully that you're staring down the barrels of a fork because your tree is now evolutionarily disadvantaged'. It's a game and can be gamed; I'd prefer that the community shepherds users to follow the pattern of stating problem + solution + functional test evidence + performance test evidence + security test evidence.
K3n.

Posted Nov 9, 2015 6:49 UTC (Mon) by jospoortvliet (guest, #33164) [Link]

And about that fork barrel: I'd argue it's the other way around. Google forked and lost already.

Posted Nov 12, 2015 6:25 UTC (Thu) by Garak (guest, #99377) [Link]

Posted Nov 23, 2015 6:33 UTC (Mon) by jospoortvliet (guest, #33164) [Link]

Posted Nov 7, 2015 3:20 UTC (Sat) by corbet (editor, #1) [Link]

So I have to confess to a certain amount of confusion. I could swear that the article I wrote said exactly that, but you've put a fair amount of effort into flaming it...?

Posted Nov 8, 2015 1:34 UTC (Sun) by PaXTeam (guest, #24616) [Link]

Posted Nov 6, 2015 22:52 UTC (Fri) by flussence (subscriber, #85566) [Link]

I personally think you and Nick Krause share opposite sides of the same coin. Programming ability and basic civility.

Posted Nov 6, 2015 22:59 UTC (Fri) by dowdle (subscriber, #659) [Link]

Posted Nov 7, 2015 0:16 UTC (Sat) by rahvin (guest, #16953) [Link]

I hope I'm wrong, but a hostile attitude isn't going to help anybody get paid. It's at a time like this, when something you appear to be an "expert" at is in demand, that you show cooperation and willingness to participate, because it's an opportunity. I'm relatively surprised that somebody doesn't get that, but I'm older and have seen a number of these opportunities in my career and exploited the hell out of them. You only get a few of them in the average career, and a handful at the most.
Sometimes you have to invest in proving your skills, and this is one of those moments. It seems the kernel community may finally take this security lesson to heart and embrace it, as stated in the article as a "mindcraft moment". This is an opportunity for developers that may want to work on Linux security. Some will exploit the opportunity and others will thumb their noses at it. In the end those developers that exploit the opportunity will prosper from it.
I feel old even having to write that.

Posted Nov 7, 2015 1:00 UTC (Sat) by josh (subscriber, #17465) [Link]

Maybe there's a chicken-and-egg problem here, but when seeking out and funding people to get code upstream, it helps to pick people and groups with a history of being able to get code upstream.
It's entirely reasonable to prefer working out of tree, providing the ability to develop impressive and significant security advances unconstrained by upstream requirements. That's work somebody might also want to fund, if that meets their needs.

Posted Nov 7, 2015 1:28 UTC (Sat) by PaXTeam (guest, #24616) [Link]

Posted Nov 7, 2015 19:12 UTC (Sat) by jejb (subscriber, #6654) [Link]

You make this argument (implying you do research and Josh doesn't) and then fail to support it with any citation. It would be much more convincing if you gave up on the onus probandi rhetorical fallacy and actually cited facts.
> case in point, it was *them* who suggested that they wouldn't fund out-of-tree work but would consider funding upstreaming work, except when pressed for the details, all i got was silence.
For those following along at home, this is the relevant set of threads:
http://lists.coreinfrastructure.org/pipermail/cii-discuss...
A quick precis is that they told you your project was bad because the code was never going upstream. You told them it was because of kernel developers' attitude, so they should fund you anyway. They told you to submit a grant proposal, you whined more about the kernel attitudes and finally even your apologist told you that submitting a proposal might be the best thing to do. At that point you went silent, not vice versa as you suggest above.
> obviously i will not spend time to write up a begging proposal just to be told that 'no sorry, we do not fund multi-year projects at all'. that's something that one should be told upfront (or heck, be part of some public rules so that others will know the rules too).
You seem to have a fatally flawed grasp of how public funding works. If you do not tell people why you want the money and how you will spend it, they are unlikely to disburse. Saying I am brilliant and I know the problem, now hand over the money, does not even work for most academics who have a strong reputation in the field; which is why most of them spend >30% of their time writing grant proposals.
> as for getting code upstream, how about you check the kernel git logs (minus the stuff that was not properly credited)?
jejb@jarvis> git log|grep -i 'Author: pax.*team'|wc -l
1
Stellar, I have to say. And before you light off on those who have misappropriated your credit, please remember that getting code upstream on behalf of reluctant or incapable actors is a hugely valuable and time-consuming skill and one of the reasons groups like Linaro exist and are well funded. If more of your stuff does go upstream, it will be because of the not inconsiderable efforts of other people in this area.
You now have a business model selling non-upstream security patches to customers. There's nothing wrong with that, it's a fairly traditional first-stage business model, but it does rather depend on the patches not being upstream in the first place, calling into question the earnestness of your attempt to place them there.
Now here's some free advice in my field, which is assisting companies to align their businesses with open source: the selling-out-of-tree-patches route is always an eventual failure, particularly with the kernel, because if the functionality is that useful, it gets upstreamed or reinvented in spite of you, leaving you with nothing to sell. If your business plan B is selling expertise, you have to bear in mind that it will be a hard sell when you've no out-of-tree differentiator left and git history denies that you had anything to do with the in-tree patches. In fact "crazy security person" will become a self-fulfilling prophecy. The advice? It was obvious to everyone else who read this, but for you, it's do the upstreaming yourself before it gets done for you. That way you have a legitimate historical claim to Plan B and you might also have a Plan A selling a rollup of upstream-track patches integrated and delivered before the distributions get around to it. Even your application to the CII couldn't be dismissed because your work wasn't going anywhere. Your alternative is to continue playing the role of Cassandra and probably suffer her eventual fate.

Posted Nov 7, 2015 23:20 UTC (Sat) by PaXTeam (guest, #24616) [Link]

> Second, for the probably viable items this would be a multi-year
> full time job. Is the CII willing to fund projects at that level? If not
> we all would end up with lots of unfinished and partially broken features.
please show me the answer to that question. without a definitive 'yes' there's no point in submitting a proposal because this is the time frame that in my opinion the job will take, and any proposal with that requirement would be shot down immediately and be a waste of my time. and i stand by my claim that such simple basic requirements should be public information.
> Stellar, I have to say.
"Lies, damned lies, and statistics". you realize there's more than one way to get code into the kernel? how about you use your git-fu to find all the bugreports/suggested fixes that went in because of us? as for specifically me, Greg explicitly banned me from future contributions via af45f32d25cc1 so it's no surprise i don't send patches directly in (and that one commit you found that went in despite said ban is actually a very bad example because it's also the one that Linus censored for no good reason and made me decide to never send security fixes upstream until that practice changes).
> You now have a business model selling non-upstream security patches to customers.
now? we have had paid sponsorship for our various stable kernel series for 7 years. i wouldn't call it a business model though, as it hasn't paid anybody's bills.
> [...]calling into question the earnestness of your attempt to place them there.
i must be missing something here but what attempt? i've never in my life tried to submit PaX upstream (for all the reasons discussed already). the CII mails were exploratory, to see how serious that whole organization is about actually securing core infrastructure. in a sense i've got my answers, there's nothing more to the story.
as to your free advice, let me reciprocate: complex problems don't solve themselves. code solving complex problems doesn't write itself. people writing code solving complex problems are few and far between and not something you will find on short order. such people (domain experts) don't work for free, with few exceptions like ourselves. biting the hand that feeds you will only end you up in starvation.
PS: since you are so certain about kernel developers' ability to reimplement our code, maybe have a look at what parallel features i still maintain in PaX despite vanilla having a 'totally-not-reinvented-here' implementation and try to understand the reason. or just have a look at all the CVEs that affected, say, vanilla's ASLR but didn't affect mine.
PPS: Cassandra never wrote code, i do. criticizing the sorry state of kernel security is a side project when i'm bored or just waiting for the next kernel to compile (i wish LTO was more efficient).

Posted Nov 8, 2015 2:28 UTC (Sun) by jejb (subscriber, #6654) [Link]

In other words, you tried to define their process for them ... I can't think why that wouldn't work.
> "Lies, damned lies, and statistics".
The problem with ad hominem attacks is that they're singularly ineffective against a transparently factual argument. I posted a one-line command anybody could run to get the number of patches you've authored in the kernel. Why don't you post an equivalent that gives figures you like more?
> i've never in my life tried to submit PaX upstream (for all the reasons discussed already).
So the master plan is to demonstrate your expertise by the number of patches you haven't submitted? great plan, world domination beckons, sorry that one got away from you, but I'm sure you won't let it happen again.

Posted Nov 8, 2015 2:56 UTC (Sun) by PaXTeam (guest, #24616) [Link]

what? since when does asking a question define anything? isn't that how we find out what someone else thinks? isn't that what *they* have that webform (never mind the mailing lists) for as well? in other words you admit that my question was not actually answered.
> The problem with ad hominem attacks is that they are singularly ineffective against a transparently factual argument.
you didn't have an argument to begin with, that's what i explained in the part you carefully chose not to quote. i'm not here to defend myself against your clearly idiotic attempts at proving whatever you're trying to prove, as they say even in kernel circles, code speaks, bullshit walks. you can look at mine and determine what i can or can't do (not that you have the knowledge to understand most of it, mind you). that said, there're clearly other more capable people who've done so and decided that my/our work was worth something, else nobody would have been feeding off of it for the past 15 years and still counting. and as unimaginable as it may seem to you, life doesn't revolve around the vanilla kernel, not everybody's dying to get their code in there, especially when it means putting up with such silly hostility on lkml that you now also demonstrated here (it's ironic how you came to the defense of josh who specifically asked people not to bring that notorious lkml style here. nice job there James.). as for world domination, there're many ways to achieve it, and something tells me that you're clearly out of your league here since PaX has already achieved that. you're running such code that implements PaX features as we speak.

Posted Nov 8, 2015 16:52 UTC (Sun) by jejb (subscriber, #6654) [Link]

I posted the one-line git script giving your authored patches in response to this original request by you (this one, just in case you've forgotten: http://lwn.net/Articles/663591/):
> as for getting code upstream, how about you check the kernel git logs (minus the stuff that was not properly credited)?
I take it, by the way you've shifted ground in the previous threads, that you wish to withdraw that request?

Posted Nov 8, 2015 19:31 UTC (Sun) by PaXTeam (guest, #24616) [Link]

Posted Nov 8, 2015 22:31 UTC (Sun) by pizza (subscriber, #46) [Link]

Please provide one that is not wrong, or is less wrong. It will take less time than you've already wasted here.

Posted Nov 8, 2015 22:49 UTC (Sun) by PaXTeam (guest, #24616) [Link]

anyway, since it's you guys who have a bee in your bonnet, let's test your level of intelligence too. first figure out my email address and project name, then try to find the commits that say they come from there (it brought back some memories from 2004 already, how time flies! i'm surprised i actually managed to accomplish this much with explicitly not trying, imagine if i did :). it's an extremely complex task so by accomplishing it you'll prove yourself to be the top dog here on lwn, whatever that's worth ;).

Posted Nov 8, 2015 23:25 UTC (Sun) by pizza (subscriber, #46) [Link]

*shrug* Or don't; you're only sullying your own reputation.

Posted Nov 9, 2015 7:08 UTC (Mon) by jospoortvliet (guest, #33164) [Link]

Posted Nov 9, 2015 11:38 UTC (Mon) by hkario (subscriber, #94864) [Link]

I wouldn't either

Posted Nov 12, 2015 2:09 UTC (Thu) by jschrod (subscriber, #1646) [Link]

Posted Nov 12, 2015 8:50 UTC (Thu) by nwmcsween (guest, #62367) [Link]

Posted Nov 8, 2015 3:38 UTC (Sun) by PaXTeam (guest, #24616) [Link]

Posted Nov 12, 2015 13:47 UTC (Thu) by nix (subscriber, #2304) [Link]

Ah. I thought my memory wasn't failing me. Compare to PaXTeam's response to .
PaXTeam isn't averse to outright lying if it means he gets to seem right, I see. Maybe PaXTeam's memory is failing, and this apparent contradiction is not a brazen lie, but given that the two posts were made within a day of one another I doubt it. (PaXTeam's total unwillingness to assume good faith in others deserves some reflection. Yes, I *do* assume he's lying by implication here, and doing so when there's virtually nothing at stake. God alone knows what he's willing to stoop to when something *is* at stake. Gosh, I wonder why his fixes aren't going upstream very fast.)

Posted Nov 12, 2015 14:11 UTC (Thu) by PaXTeam (guest, #24616) [Link]

> and that one commit you found that went in despite said ban
also somebody's ban doesn't mean it'll translate into someone else's execution of that ban, as is clear from the commit in question. it's somewhat sad that it takes a security fix to expose the fallacy of this policy though. the rest of your pithy ad hominem speaks for itself better than i ever could ;).

Posted Nov 12, 2015 15:58 UTC (Thu) by andreashappe (subscriber, #4810) [Link]

Posted Nov 7, 2015 19:01 UTC (Sat) by cwillu (guest, #67268) [Link]

I don't see this message in my mailbox, so presumably it got swallowed.

Posted Nov 7, 2015 22:33 UTC (Sat) by ssmith32 (subscriber, #72404) [Link]

You are aware that it's entirely possible that everyone is wrong here, right?
That the kernel maintainers need to focus more on security, that the article was biased, that you are irresponsible to decry the state of security and do nothing to help, and that your patchsets wouldn't help that much and are the wrong direction for the kernel? That just because the kernel maintainers aren't 100% right it doesn't mean you are?

Posted Nov 9, 2015 9:50 UTC (Mon) by njd27 (guest, #5770) [Link]

I think you have him backwards there. Jon is comparing this to Mindcraft because he thinks that despite being unpalatable to much of the community, the article might in fact contain a lot of truth.

Posted Nov 9, 2015 14:03 UTC (Mon) by corbet (editor, #1) [Link]

Posted Nov 9, 2015 15:13 UTC (Mon) by spender (guest, #23067) [Link]

"There are rumors of dark forces that drove the article in the hopes of taking Linux down a notch. All of this could well be true"
Just as you criticized the article for mentioning Ashley Madison even though in the very first sentence of the following paragraph it mentions it did not involve the Linux kernel, you can't give credence to conspiracy theories without incurring the same criticism (in other words, you can't play the Glenn Beck "I'm just asking the questions here!" whose "questions" fuel the conspiracy theories of others). Much like mentioning Ashley Madison as an example for non-technical readers about the prevalence of Linux in the world, if you're criticizing the mention, then shouldn't likening a non-FUD article to a FUD article also deserve criticism, especially given the rosy, self-congratulatory picture you painted of upstream Linux security?
As the PaX Team pointed out in the initial post, the motivations aren't hard to understand -- you made no mention at all of it being the fifth in a long-running series following a fairly predictable time trajectory.
No, we didn't miss the general analogy you were trying to make, we just don't think you can have your cake and eat it too.
-Brad

Posted Nov 9, 2015 15:18 UTC (Mon) by karath (subscriber, #19025) [Link]

Posted Nov 9, 2015 17:06 UTC (Mon) by k3ninho (subscriber, #50375) [Link]

It's gracious of you not to blame your readers. I figure they're a fair target: there's that line about those ignorant of history being condemned to re-implement Unix -- as your readers are! :-)
K3n.

Posted Nov 9, 2015 18:43 UTC (Mon) by bojan (subscriber, #14302) [Link]

Unfortunately, I understand neither the "security" folks (PaXTeam/spender) nor the mainstream kernel people when it comes to their attitude. I confess I have absolutely no technical capabilities on any of these matters, but if all of them decided to work together, instead of having endless and pointless flame wars and blame-game exchanges, a lot of the stuff would have been done already. And all the while everyone involved could have made another big pile of money on the stuff. All of them seem to want to have a better Linux kernel, so I've got no idea what the problem is. It seems that nobody is willing to yield any of their positions even a little bit. Instead, both sides seem to be bent on trying to insult their way into forcing the other side to give up. Which, of course, never works - it just causes more pushback.
Perplexing stuff...

Posted Nov 9, 2015 19:00 UTC (Mon) by sfeam (subscriber, #2841) [Link]

Posted Nov 9, 2015 19:44 UTC (Mon) by bojan (subscriber, #14302) [Link]

Take a scientific computational cluster with an "air gap", for example. You'd probably want most of the security stuff turned off on it to achieve maximum performance, because you can trust all users. Now take a few billion mobile phones that may be difficult or slow to patch. You'd probably want to kill most of the exploit classes there, if those devices can still run reasonably well with most security features turned on.
So, it isn't either/or. It's most likely "it depends". However, if the stuff isn't there for everybody to compile/use in the vanilla kernel, it will be harder to make it a part of everyday choices for distributors and users.

Posted Nov 6, 2015 22:20 UTC (Fri) by artem (subscriber, #51262) [Link]

How sad. This Dijkstra quote comes to mind immediately:
Software engineering, of course, presents itself as another worthy cause, but that is eyewash: if you carefully read its literature and analyse what its devotees actually do, you will discover that software engineering has accepted as its charter "How to program if you cannot."

Posted Nov 7, 2015 0:35 UTC (Sat) by roc (subscriber, #30627) [Link]

I guess that fact was too unpleasant to fit into Dijkstra's world view.

Posted Nov 7, 2015 10:52 UTC (Sat) by ms (subscriber, #41272) [Link]

Indeed. And the interesting thing to me is that once I reach that point, tests are not enough - model checking at a minimum and really proofs are the only way forwards. I'm no security expert, my field is all distributed systems. I understand and have implemented Paxos and I believe I can explain how and why it works to anybody. But I'm currently doing some algorithms combining Paxos with a bunch of variations on VectorClocks and reasoning about causality and consensus. No test is sufficient because there are infinite interleavings of events and my head just couldn't cope with working on this either at the computer or on paper - I found I couldn't intuitively reason about this stuff at all. So I started defining the properties I wanted and gradually proving why each of them holds. Without my notes and proofs I can't even explain to myself, let alone anyone else, why this thing works. I find it both utterly obvious that this will happen and utterly terrifying - the maintenance cost of these algorithms is now an order of magnitude higher.
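To give a flavour of the kind of property that ends up being proved (an editorial sketch, not taken from the comment itself; the chosen predicate, the ballot numbers b, b' and the invariant I below are illustrative assumptions), the core Paxos safety statement and the per-step obligation can be written in LaTeX as:

\[ \forall v, v', b, b' :\; \mathit{chosen}(v, b) \land \mathit{chosen}(v', b') \Rightarrow v = v' \]

\[ \forall s, s' :\; I(s) \land \mathit{Step}(s, s') \Rightarrow I(s') \]

The second line is the part that has to be re-argued for every variation folded into the protocol, which is where the maintenance cost described above comes from.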

Posted Nov 19, 2015 12:24 UTC (Thu) by Wol (subscriber, #4433) [Link]

> Indeed. And the interesting thing to me is that once I reach that point, tests are not enough - model checking at a minimum and really proofs are the only way forwards.
Or are you just using the wrong maths? Hobbyhorse time again :-) but to quote a fellow Pick developer ... "I often walk into a SQL development shop and see that wall - you know, the one with the huge SQL schema that no-one fully understands on it - and wonder how I can just hold all the schema for a Pick database of the same or greater complexity in my head".
But it's easy - by education I'm a Chemist, by interest a Physical Chemist (and by profession an unemployed programmer :-). And when I'm thinking about chemistry, I can ask myself "what is an atom made of" and think about things like the strong nuclear force. Next level up, how do atoms stick together and make molecules, and think about the electroweak force and electron orbitals, and how do chemical reactions happen. Then I think about how molecules stick together to make materials, and think about metals, and/or Van der Waals, and stuff.
Point is, you need to *layer* stuff, and look at things, and say "how can I split bits off into 'black boxes' so at any one level I can assume the other levels 'just work'". For example, with Pick a FILE (table to you) stores a class - a set of identical objects. One object per Record (row). And, same as relational, one attribute per Field (column). Can you map your relational tables to reality so simply? :-)
Going back THIRTY years, I remember a story about a guy who built little computer crabs that could quite happily scuttle around in the surf zone. Because he didn't try to work out how to solve all the problems at once - each of his (extremely puny by today's standards - this is the 8080/Z80 era!) processors was set to just process a little bit of the problem and there was no central "brain". But it worked ... Perhaps you should just write a bunch of small modules to solve each individual problem, and let the final answer "just happen".
Cheers,
Wol

Posted Nov 19, 2015 19:28 UTC (Thu) by ksandstr (guest, #60862) [Link]

To my understanding, this is exactly what a mathematical abstraction does. For example in Z notation we might construct schemas for the various modifying ("delta") operations on the base schema, and then argue about preservation of formal invariants, properties of the result, and transitivity of the operation when chained with itself, or with the previous aggregate schema composed of schemas A through O (for which these things have already been argued).
The result is a set of operations that, executed in arbitrary order, result in a set of properties holding for the end state and outputs. Thus the formal design is proved correct (with caveat lector regarding scope, correspondence with its implementation [though that can be proven as well], and read-only ["xi"] operations).
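For readers who haven't seen Z, here is a minimal editorial sketch of the idea (the Counter state, its bound max and the Increment operation are invented for illustration; plain LaTeX is used instead of the usual Z macro packages):

% A base state schema with an invariant:
\[ Counter \;\widehat{=}\; [\, n, max : \mathbb{N} \mid n \le max \,] \]
% A "delta" operation relating the before-state (n, max) to the after-state (n', max'):
\[ Increment \;\widehat{=}\; [\, \Delta Counter \mid n < max \;\land\; n' = n + 1 \;\land\; max' = max \,] \]
% The preservation obligation argued for each such operation:
\[ \forall\, n, max, n', max' :\; (n \le max) \land Increment \Rightarrow (n' \le max') \]

Chaining Increment with itself, or with the other operations in the aggregate schema, then reduces to discharging the same obligation for the composed relation.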

Posted Nov 20, 2015 11:23 UTC (Fri) by Wol (subscriber, #4433) [Link]

Looking through the history of computing (and probably plenty of other fields too), you'll probably find that people "can't see the wood for the trees" more often than not. They dive into the detail and completely miss the big picture.
(Medicine, an interest of mine, suffers from that too - I remember somebody talking about a doctor wanting to amputate a gangrenous leg to save somebody's life - oblivious to the fact that the patient was dying of cancer.)
Cheers,
Wol

Posted Nov 7, 2015 6:35 UTC (Sat) by dgc (subscriber, #6611) [Link]

https://www.youtube.com/watch?v=VpuVDfSXs-g
(LCA 2015 - "Programming Considered Harmful")
FWIW, I think that this talk is very relevant to why writing secure software is so hard..
-Dave.

Posted Nov 7, 2015 5:49 UTC (Sat) by kunitz (subscriber, #3965) [Link]

While we're spending millions on a large number of security problems, kernel issues are not on our top-priority list. Actually I remember only once having discussed a kernel vulnerability. The result of the analysis was that all our systems were running kernels that were older than the kernel that had the vulnerability.
But "patch management" is a real issue for us. Software must continue to work if we install security patches or update to new releases because of the end-of-life policy of a vendor. The revenue of the company depends on the IT systems running. So "not breaking user space" is a security feature for us, because a breakage of one component of our several tens of thousands of Linux systems will stop the roll-out of the security update.
Another problem is embedded software or firmware. Nowadays virtually all hardware systems include an operating system, usually some Linux version, providing a full network stack embedded to support remote management. Often those systems don't survive our mandatory security scan, because vendors still haven't updated the embedded openssl.
The real challenge is to provide a software stack that can be operated in the hostile environment of the Internet, maintaining full system integrity for ten years or even longer without any customer maintenance. The current state of software engineering will require support for an automated update process, but vendors must understand that their business model must be able to finance the resources providing the updates.
Overall I'm optimistic: networked software is not the first technology used by mankind causing problems that were addressed later. Steam engine use could result in boiler explosions, but the "engineers" were able to reduce this risk significantly over a few decades.

Posted Nov 7, 2015 10:29 UTC (Sat) by ms (subscriber, #41272) [Link]

The following is all guesswork; I'd be keen to know if others have evidence one way or another on this: The people who learn how to hack into these systems via kernel vulnerabilities know that the skills they've learnt have a market. Thus they don't tend to hack in order to wreak havoc - indeed, on the whole, where data has been stolen in order to release and embarrass people, it _appears_ as though those hacks are via much simpler vectors. I.e. lesser skilled hackers find there's a whole load of low-hanging fruit which they can get at. They're not being paid ahead of time for the data, so they turn to extortion instead. They don't cover their tracks, and they can often be found and charged with criminal offences.
So if your security meets a certain basic level of proficiency and/or your company is not doing anything that places it near the top of "companies we'd like to embarrass" (I believe the latter is far more effective at keeping systems "safe" than the former), then the hackers that get into your system are likely to be skilled, paid, and probably not going to do much damage - they're stealing data for a competitor / state. So that doesn't hurt your bottom line - at least not in a way which your shareholders will be aware of. So why fund security?

Posted Nov 7, 2015 17:02 UTC (Sat) by citypw (guest, #82661) [Link]

On the other hand, some effective mitigation at the kernel level would be very helpful to crush the cybercriminal/skiddie's attempts. If one of your customers running a futures trading platform exposes some open API to their clients, and the server has some memory corruption bugs that can be exploited remotely, then you know there are known attack techniques (such as offset2lib) that can help the attacker make the weaponized exploit much easier. Will you explain the failosophy "A bug is a bug" to your customer and tell them it'd be okay? Btw, offset2lib is useless against PaX/Grsecurity's ASLR implementation.
For most commercial uses, extra security mitigation within the software won't cost you extra budget. You still have to do the regression test for each upgrade.

Posted Nov 12, 2015 16:14 UTC (Thu) by andreashappe (subscriber, #4810) [Link]

Keep in mind that I specialize in external web-based penetration tests and that in-house tests (local LAN) will likely yield different results.

Posted Nov 7, 2015 20:33 UTC (Sat) by mattdm (subscriber, #18) [Link]

I keep reading this headline as "a new Minecraft moment", and thinking that maybe they've decided to follow up the .NET thing by open-sourcing Minecraft. Oh well. I mean, security is good too, I guess.

Posted Nov 7, 2015 22:24 UTC (Sat) by ssmith32 (subscriber, #72404) [Link]

Posted Nov 12, 2015 17:29 UTC (Thu) by smitty_one_each (subscriber, #28989) [Link]

Posted Nov 8, 2015 10:34 UTC (Sun) by jcm (subscriber, #18262) [Link]

Posted Nov 9, 2015 7:15 UTC (Mon) by jospoortvliet (guest, #33164) [Link]

Posted Nov 9, 2015 15:53 UTC (Mon) by neiljerram (subscriber, #12005) [Link]

(Oh, and I was also still wondering how Minecraft had taught us about Linux performance - so thanks to the other comment thread that pointed out the 'd', not 'e'.)

Posted Nov 9, 2015 11:31 UTC (Mon) by ortalo (guest, #4654) [Link]

I would just like to add that in my opinion, there is a general problem with the economics of computer security, which is especially visible currently. Two problems, even, possibly.
First, the money spent on computer security is often diverted towards the so-called security "circus": quick, easy solutions that are primarily chosen just in order to "do something" and get better press. It took me a long time - maybe decades - to claim that no security mechanism at all is better than a bad mechanism. But now I firmly believe in this view and would rather take the risk knowingly (provided that I can keep the money/resources for myself) than take a bad approach at solving it (and have no money/resources left when I realize I should have done something else). And I find there are a lot of bad or incomplete approaches currently available in the computer security field.
Those spilling our scarce money/resources on ready-made useless tools should get the bad press they deserve. And, we definitely need to enlighten the press on that, because it's not so easy to understand the effectiveness of protection mechanisms (which, by definition, should prevent things from happening).
Second, and that may be more recent and more worrying: the flow of money/resources is oriented in the direction of attack tools and vulnerability discovery much more than in the direction of new protection mechanisms.
This is especially worrying as cyber "defense" initiatives look more and more like the usual industrial projects aimed at producing weapons or intelligence systems. Furthermore, bad ineffective weapons, because they only work against our very vulnerable current systems; and bad intelligence systems, as even basic school-level encryption scares them down to uselessness.
Nevertheless, all the resources go to these adult teenagers playing the white hat hackers with not-so-difficult programming tricks or network monitoring or WWI-level cryptanalysis. And now also to the cyberwarriors and cyberspies that have yet to prove their usefulness entirely (especially for peace protection...).
Personally, I'd happily leave them all the hype; but I'll forcefully claim that they have no right at all to any of the budget allocation decisions. Only those working on protection should. And yep, it means we should decide where to put those resources. We have to claim the exclusive lock for ourselves this time. (and I guess the PaXteam would be among the first to benefit from such a change).
While thinking about it, I wouldn't even leave white-hats or cyber-guys any hype in the end. That's more publicity than they deserve.
I crave for the day I will read in the newspaper that: "Another of these ill-advised debutant programmer hooligans that pretend to be cyber-pirates/warriors modified some well known virus program code exploiting a programmer mistake and somehow managed to bring one of those unfinished and bad quality programs, X, that we are all obliged to use, to its knees, annoying millions of regular users with his unfortunate cyber-vandalism. All the protection experts unanimously recommend that, once again, the budget of the cyber-command be retargeted, or at least leveled off, in order to bring more security engineer positions into academia or civilian industry. And that X's producer, XY Inc., be held liable for the potential losses if proved to be unprofessional in this affair."

Hmmm - cyber-hooligans - I like the label. Though it doesn't apply well to the battlefield-oriented variant.

Posted Nov 9, 2015 14:28 UTC (Mon) by drag (guest, #31333) [Link]

The state of the 'software security industry' is a f-ng disaster. Failure of the highest order. There are massive amounts of money going into 'cyber security', but it's usually spent on government compliance and audit efforts. This means that instead of actually putting effort into correcting issues and mitigating future problems, the majority of the effort goes into taking existing applications and making them conform to committee-driven guidelines with the minimal amount of effort and changes.
Some level of regulation and standardization is absolutely needed, but lay people are clueless and completely unable to discern the difference between somebody who has useful expertise and some company that has spent millions on slick marketing and 'native advertising' on large websites and computer magazines. The people with the money sadly only have their own judgment to rely on when buying into 'cyber security'.
> Those spilling our scarce money/resources on ready-made useless tools should get the bad press they deserve.
There is no such thing as 'our scarce money/resources'. You have your money, I have mine. Money being spent by some corporation like Redhat is their money. Money being spent by governments is the government's money. (you, literally, have much more control over how Walmart spends its money than over what your government does with theirs)
> This is especially worrying as cyber "defense" initiatives look more and more like the usual industrial projects aimed at producing weapons or intelligence systems. Furthermore, bad ineffective weapons, because they only work against our very vulnerable current systems; and bad intelligence systems, as even basic school-level encryption scares them down to uselessness.
Having secure software with strong encryption mechanisms in the hands of the general public runs counter to the interests of most major governments. Governments, like any other for-profit organization, are primarily interested in self-preservation. Money spent on drone programs or banking auditing/oversight regulation compliance is FAR more valuable to them than trying to help the public have a secure mechanism for making phone calls. Especially when those secure mechanisms interfere with data collection efforts.
Unfortunately you/I/we cannot depend on some magical benefactor with deep pockets to sweep in and make Linux better. It's just not going to happen.
Corporations like Redhat have been massively helpful in spending resources to make the Linux kernel more capable.. but they are driven by the need to turn a profit, which means they have to cater to the kind of requirements established by their customer base. Customers for EL tend to be far more focused on reducing costs related to management and software development than on security at the low-level OS.
Enterprise Linux customers tend to rely on physical, human-policy, and network security to protect their 'soft' interiors from being exposed to external threats.. assuming (rightly) that there is very little they can do to actually harden their systems. In fact, when the choice comes down to security vs convenience, I'm sure that the majority of customers will happily defeat or strip out any security mechanisms introduced into Linux.
On top of that, most Enterprise software is extremely bad. So much so that 10 hours spent on improving a web front-end will yield more real-world security benefits than 1000 hours spent on Linux kernel bugs for most businesses.
Even for 'normal' Linux users, a security bug in their Firefox's NPAPI flash plugin is far more devastating and poses a massively greater risk than an obscure Linux kernel buffer overflow problem. It's just not really necessary for attackers to get 'root' to gain access to the important data... usually all of which is contained in a single user account.
Ultimately it's up to people like you and myself to put the effort and money into improving Linux security. For both ourselves and other people.

Posted Nov 10, 2015 11:05 UTC (Tue) by ortalo (guest, #4654) [Link]

Spilling has always been the case, but now, to me and in computer security, most of the money seems spilled due to bad faith. And this is generally your money or mine: either tax-fueled governmental resources or company costs that are directly re-imputed into the prices of the goods/software we are told we are *obliged* to buy. (Have a look at corporate firewalls, home alarms or antivirus software marketing discourse.)
I think it's time to point out that there are a number of "malicious malefactors" around and that there's a real need to identify and sanction them and confiscate the resources they have somehow managed to monopolize. And I do *not* think Linus is among such culprits, by the way. But I think he may be among those hiding their heads in the sand about the aforementioned evil actors, while he probably has more leverage to counteract them or oblige them to reveal themselves than many of us.
I find that to be of brown-paper-bag level (though head-in-the-sand is somehow a new interpretation).
Ultimately, I believe you are right to say that currently it's solely up to us individuals to try honestly to do something to improve Linux or computer security. But I still think that I'm right to say that this is not normal; especially while some very serious people get very serious salaries to distribute randomly some difficult-to-evaluate budgets.
[1] A paradoxical situation when you think about it: in a domain where you are first preoccupied by malicious people, everyone should have factual, transparent and honest behavior as the first priority in their mind.

Posted Nov 9, 2015 15:47 UTC (Mon) by MarcB (subscriber, #101804) [Link]

It even has a nice, seven-line BASIC-pseudo-code that describes the current situation and clearly shows that we are stuck in an endless loop. It does not answer the big question, though: how to write better software.
The sad thing is that this is from 2005 and all the things that were obviously stupid ideas 10 years ago have proliferated even more.

Posted Nov 10, 2015 11:20 UTC (Tue) by ortalo (guest, #4654) [Link]

Note: IMHO, we should investigate further why these dumb things proliferate and get so much support.
If it is only human psychology, well, let's fight it: e.g. Mozilla has shown us that they can do wonderful things given the right message.
If we are facing active people exploiting public credulity: let's identify and fight them.
But, more importantly, let's capitalize on this knowledge and secure *our* systems, to showcase at a minimum (and more later on of course).
Your reference's conclusion is especially nice to me. "challenge [...] the conventional wisdom and the status quo": that job I'd happily accept.

Posted Nov 30, 2015 9:39 UTC (Mon) by paulj (subscriber, #341) [Link]

That rant is itself a bunch of "empty calories". The converse of the items it rants about, which it is suggesting at some level, would be as bad or worse, and indicative of the worst kind of security thinking that has put a lot of people off. Alternatively, it's just a rant that offers little of value.
Personally, I think there is no magic bullet. Security is and always has been, in human history, an arms race between defenders and attackers, and one that's inherently a trade-off between usability, risks and costs. If there are mistakes being made, it's that we should probably spend more resources on defences that could block whole classes of attacks. E.g., why is the GRSec kernel hardening stuff so hard to apply to regular distros (e.g. there's no reliable source of a GRSec kernel for Fedora or RHEL, is there?). Why does the whole Linux kernel run in one security context? Why are we still writing lots of software in C/C++, often without any basic safety-checking abstractions (e.g. basic bounds-checking layers in between I/O and parsing layers, say)? Can hardware do more to provide security with speed?
No doubt there are lots of people working on "block classes of attacks" stuff, the question is, why aren't there more resources directed there?
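To make the "basic bounds-checking layer between I/O and parsing" idea concrete, here is a minimal sketch in C (an editorial illustration only; the cursor type and function names are invented, not taken from any existing project):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* A bounded cursor over an I/O buffer: parsers above this layer can only
 * read through these accessors, so every access is length-checked. */
struct cursor {
	const uint8_t *data;
	size_t len;
	size_t pos;	/* invariant: pos <= len */
};

/* Copy n bytes out of the buffer; returns 0 on success, -1 if fewer
 * than n bytes remain (the parser must treat that as malformed input). */
static int cursor_read(struct cursor *c, void *out, size_t n)
{
	if (n > c->len - c->pos)
		return -1;
	memcpy(out, c->data + c->pos, n);
	c->pos += n;
	return 0;
}

/* Example parser built on the checked layer: a 16-bit big-endian length
 * followed by that many payload bytes. */
static int parse_record(struct cursor *c, const uint8_t **payload, size_t *plen)
{
	uint8_t hdr[2];

	if (cursor_read(c, hdr, sizeof(hdr)) < 0)
		return -1;
	*plen = ((size_t)hdr[0] << 8) | hdr[1];
	if (*plen > c->len - c->pos)	/* reject truncated input */
		return -1;
	*payload = c->data + c->pos;
	c->pos += *plen;
	return 0;
}

The point of such a layer is that the unchecked pointer arithmetic lives in one small, auditable place instead of being repeated (and occasionally fumbled) in every parser.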

Posted Nov 10, 2015 2:06 UTC (Tue) by timrichardson (subscriber, #72836) [Link]

>There are a number of reasons why Linux lags behind in defensive security technologies, but one of the key ones is that the companies making money on Linux have not prioritized the development and integration of those technologies.
This seems like a reason which is really worth exploring. Why is it so?
I think it isn't obvious why this doesn't get some more attention. Is it possible that the people with the money are right not to prioritise this more highly? After all, what interest do they have in an insecure, exploitable kernel? Where there is common cause, linux development gets resourced. It's been this way for many years. If filesystems qualify for common interest, surely security does. So there doesn't seem to be any obvious reason why this issue doesn't get more mainstream attention, except that it really already gets enough. You may say that disaster has not struck yet, that the iceberg has not been hit. But it seems that the linux development process is not overly reactive elsewhere.

Posted Nov 10, 2015 15:53 UTC (Tue) by raven667 (subscriber, #5198) [Link]

That is an interesting question; really, that's what they actually believe no matter what they publicly say about their commitment to security technologies. What is the actually demonstrated downside for kernel developers and the organizations that pay them? As far as I can tell there is not enough consequence for the lack of security to drive more investment, so we're left begging and cajoling unconvincingly.

Posted Nov 12, 2015 14:37 UTC (Thu) by ortalo (guest, #4654) [Link]

The key issue with this domain is that it pertains to malicious faults. So, by the time consequences manifest themselves, it is too late to act. And if the current commitment to an absence of a voluntary strategy persists, we will oscillate between phases of relaxed unconsciousness and anxious paranoia.
Admittedly, kernel developers seem pretty resistant to paranoia. That is a good thing. But I'm waiting for the day when armed land-drones patrol US streets in the vicinity of their children's schools for them to find the feeling. Not so distant are the times when innocent lives will unconsciously depend on the security of (linux-based) computer systems; under water, that is already the case if I remember my last dive correctly, as well as in several recent cars according to some reports.

Posted Nov 12, 2015 14:32 UTC (Thu) by MarcB (subscriber, #101804) [Link]

Classic hosting companies that use Linux as an exposed front-end system are retreating from development, while HPC, mobile and "generic enterprise", i.e. RHEL/SLES, are pushing the kernel in their directions.
This is really not that surprising: for hosting needs the kernel has been "finished" for quite a while now. Apart from support for current hardware there is not much use for newer kernels. Linux 3.2, or even older, works just fine.
Hosting doesn't need scalability to hundreds or thousands of CPU cores (one uses commodity hardware), advanced instrumentation like perf or tracing (systems are locked down as much as possible) or advanced power management (if the system doesn't have constant high load, it's not making enough money). So why should hosting companies still make strong investments in kernel development? Even if they had something to contribute, the hurdles for contribution have become higher and higher.
For their security needs, hosting companies already use Grsecurity. I have no numbers, but some experience suggests that Grsecurity is basically a fixed requirement for shared hosting.
On the other hand, kernel security is almost irrelevant on the nodes of a supercomputer or on a system running large enterprise databases that are wrapped in layers of middleware. And mobile vendors simply don't care.

Posted Nov 10, 2015 4:18 UTC (Tue) by bronson (subscriber, #4806) [Link]

Linking

Posted Nov 10, 2015 13:15 UTC (Tue) by corbet (editor, #1) [Link]

Posted Nov 11, 2015 22:38 UTC (Wed) by rickmoen (subscriber, #6943) [Link]

The assembled likely recall that in August 2011, kernel.org was root compromised. I am sure the system's hard drives were sent off for forensic examination, and we have all been waiting patiently for the answer to the most important question: What was the compromise vector? From shortly after the compromise was discovered on August 28, 2011, right through April 1st, 2013, kernel.org included this note at the top of the site News: 'Thanks to all for your patience and understanding during our outage and please bear with us as we bring up the different kernel.org systems over the next few weeks. We will be writing up a report on the incident in the future.' (Emphasis added.) That comment was removed (along with the rest of the site News) in a May 2013 edit, and there hasn't been -- to my knowledge -- a peep about any report on the incident since then. This has been disappointing. When the Debian Project found unexpected compromise of several of its servers in 2007, Wichert Akkerman wrote and posted an excellent public report on exactly what happened. Likewise, the Apache Foundation did the right thing with good public post-mortems of the 2010 Web site breaches. Ars Technica's Dan Goodin was still trying to follow up on the lack of a post-mortem on the kernel.org meltdown -- in 2013. Two years ago. He wrote: Linux developer and maintainer Greg Kroah-Hartman told Ars that the investigation has yet to be completed and gave no timetable for when a report might be released. [...] Kroah-Hartman also told Ars kernel.org systems were rebuilt from scratch following the attack. Officials have developed new tools and procedures since then, but he declined to say what they are. "There will be a report later this year about site [sic] has been engineered, but don't quote me on when it will be released as I am not responsible for it," he wrote.
Who is responsible, then? Is anyone? Anybody? Bueller? Or is it a state secret, or what? Two years since Greg K-H said there would be a report 'later this year', and four years since the meltdown, and still nothing. How about some information? Rick Moen
[email protected]

Posted Nov 12, 2015 14:19 UTC (Thu) by ortalo (guest, #4654) [Link]

Less seriously, note that if even the Linux mafia doesn't know, it must be the Venusians; they are notoriously stealthy in their invasions.

Posted Nov 14, 2015 12:46 UTC (Sat) by error27 (subscriber, #8346) [Link]

I know the kernel.org admins have given talks about some of the new protections that have been put into place. There are no more shell logins; instead everything uses gitolite. The different services are on different hosts. There are more kernel.org staff now. People are using two-factor authentication. Some other stuff. Do a search for Konstantin Ryabitsev.

Posted Nov 14, 2015 15:58 UTC (Sat) by rickmoen (subscriber, #6943) [Link]

I beg your pardon if I was somehow unclear: that was stated to have been the path of entry to the machine (and I can readily believe that, as it was also the exact path of entry into shells.sourceforge.net, some years prior, around 2002, and into many other shared Web hosts for many years). But that is not what is of primary interest, and is not what the long-promised forensic study would primarily concern: how did intruders escalate to root? To quote the kernel.org administrator in the August 2011 Dan Goodin article you cited: 'How they managed to exploit that to gain root access is currently unknown and is being investigated'. Okay, folks, you've now had four years of investigation. What was the path of escalation to root? (Also, other details that would logically be covered by a forensic study, such as: Whose key was stolen? Who stole the key?) That is the sort of post-mortem that was promised prominently on the front page of kernel.org, to reporters, and elsewhere for a long time (and then summarily removed as a promise from the front page of kernel.org, without comment, along with the rest of the site News section, and apparently dropped). It still would be appropriate to know and share that information. Especially the datum of whether the path to root privilege was or was not a kernel bug (and, if not, what it was). Rick Moen
[email protected]

Posted Nov 22, 2015 12:42 UTC (Sun) by rickmoen (subscriber, #6943) [Link]

I've done a closer review of revelations that came out soon after the break-in, and think I've found the answer, by way of a leaked copy of kernel.org chief sysadmin John H. 'Warthog9' Hawley's Aug. 29, 2011 e-mail to shell users (two days before the public was informed), plus Aug. 31st comments to The Register's Dan Goodin by 'two security researchers who were briefed on the breach': Root escalation was via exploit of a Linux kernel security hole. Per the two security researchers, it was one both extremely embarrassing (wide-open access to /dev/mem contents including the running kernel's image in RAM, in 2.6 kernels of that day) and known-exploitable for the prior six years by canned 'sploits, one of which (Phalanx) was run by some script kiddie after entry using stolen dev credentials. Other tidbits:
- Site admins left the root-compromised Web servers running with all services still lit up, for multiple days.
- Site admins and the Linux Foundation sat on the information and failed to inform the public for those same several days.
- Site admins and the Linux Foundation have never revealed whether trojaned Linux source tarballs were posted in the http/ftp tree for the 19+ days before they took the site down. (Yes, git checkout was fine, but what about the thousands of tarball downloads?)
- After promising a report for several years and then quietly removing that promise from the front page of kernel.org, the Linux Foundation now stonewalls press queries.
I posted my best attempt at reconstructing the story, absent a real report from insiders, to SVLUG's main mailing list yesterday. (Essentially, these are surmises. If the people with the facts had been more forthcoming, we'd know what happened for sure.) I do have to wonder: if there's another embarrassing screwup, will we even be told about it at all? Rick Moen
[email protected]
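To make the /dev/mem detail above concrete without reproducing any exploit: the hole being described is that, on an unhardened kernel of that era, a root process could seek to an arbitrary physical address in /dev/mem and read (or write) it, including the running kernel's image. Below is a read-only sketch of just the access pattern; on an x86 kernel built with the CONFIG_STRICT_DEVMEM restriction, the same read is refused outside a small low-memory window. The 16 MB offset is an arbitrary choice for illustration.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        /* 16 MB: an arbitrary physical address well above the small
         * low-memory window that CONFIG_STRICT_DEVMEM still permits. */
        const off_t phys_addr = (off_t)16 * 1024 * 1024;
        unsigned char page[4096];

        int fd = open("/dev/mem", O_RDONLY);   /* needs root */
        if (fd < 0) {
            perror("open /dev/mem");
            return 1;
        }
        if (lseek(fd, phys_addr, SEEK_SET) < 0 ||
            read(fd, page, sizeof(page)) != (ssize_t)sizeof(page)) {
            /* With CONFIG_STRICT_DEVMEM this branch is taken: reads of
             * arbitrary RAM through /dev/mem are refused. */
            perror("read physical memory");
            close(fd);
            return 1;
        }
        printf("read %zu bytes of physical memory at offset 0x%llx\n",
               sizeof(page), (unsigned long long)phys_addr);
        close(fd);
        return 0;
    }

On a wide-open /dev/mem the read simply succeeds, which is why the reported escalation path was considered so embarrassing.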

Posted Nov 22, 2015 14:25 UTC (Sun) by spender (guest, #23067) [Link]

Also, it's preferable to use live memory acquisition prior to powering off the system, otherwise you lose out on memory-resident artifacts you can perform forensics on.
-Brad

How about the long-overdue post-mortem on the August 2011 kernel.org compromise?

Posted Nov 22, 2015 16:28 UTC (Sun) by rickmoen (subscriber, #6943) [Link]

Thanks for your comments, Brad. I had been relying on Dan Goodin's claim of Phalanx being what was used to gain root, in the bit where he cited 'two security researchers who were briefed on the breach' to that effect. Goodin also elaborated: 'Fellow security researcher Dan Rosenberg said he was also briefed that the attackers used Phalanx to compromise the kernel.org machines.' This was the first time I'd heard of a rootkit being claimed to be bundled with an attack tool, and I noted that oddity in my posting to SVLUG. That having been said, yeah, the Phalanx README doesn't specifically claim this, so maybe Goodin and his several 'security researcher' sources blew that detail, and nobody but kernel.org insiders yet knows the escalation path used to gain root.
> Also, it is preferable to use live memory acquisition prior to powering off the system, otherwise you lose out on memory-resident artifacts that you can perform forensics on.
Arguable, but it's a tradeoff; you can poke the compromised live system for state information, but with the downside of leaving your system running under hostile control. I was always taught that, on balance, it's better to pull power to end the intrusion. Rick Moen
[email protected]
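On the live-acquisition point in the exchange above, a toy sketch of the interface such tools commonly read from is below. It only verifies that /proc/kcore presents kernel memory as an ELF core image (assuming a 64-bit kernel); a real acquisition would stream the segments to external storage, or use a dedicated module such as LiME. This is illustrative, not a forensic tool.

    #include <elf.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        Elf64_Ehdr eh;
        int fd = open("/proc/kcore", O_RDONLY);   /* requires root */
        if (fd < 0) {
            perror("open /proc/kcore");
            return 1;
        }
        if (read(fd, &eh, sizeof(eh)) != (ssize_t)sizeof(eh) ||
            memcmp(eh.e_ident, ELFMAG, SELFMAG) != 0 ||
            eh.e_type != ET_CORE) {
            fprintf(stderr, "unexpected /proc/kcore format\n");
            close(fd);
            return 1;
        }
        printf("kernel memory exposed as an ELF core with %u program headers\n",
               (unsigned)eh.e_phnum);
        close(fd);
        return 0;
    }

Whether capturing that state is worth leaving a compromised machine running under hostile control is exactly the tradeoff being argued here.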

Posted Nov 20, 2015 8:23 UTC (Fri) by toyotabedzrock (guest, #88005) [Link]

Posted Nov 20, 2015 9:31 UTC (Fri) by gioele (subscriber, #61675) [Link]

With "something" you imply those who produce these closed supply drivers, right?
If the "consumer product firms" simply caught to using parts with mainlined open source drivers, then updating their merchandise could be a lot simpler.

A brand new Mindcraft moment?

Posted Nov 20, 2015 11:29 UTC (Fri) by Wol (subscriber, #4433) [Link]

They have ring 0 privilege, can access protected memory directly, and can't be audited. Trick a kernel into loading a compromised module and it's game over.
Even tickle a bug in a "good" module, and it's probably game over - in this case quite literally, as such modules are typically video drivers optimised for games ...
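To make the ring-0 point concrete: even the most trivial loadable module runs with full kernel privileges the moment it is loaded; nothing constrains what its init function may touch. A minimal sketch of the standard module layout (the names here are illustrative, nothing is specific to any vendor driver):

    /* minimal_mod.c - illustrative only: module code executes in kernel
     * context with full privileges. */
    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>

    static int __init minimal_init(void)
    {
        /* This runs in ring 0; a malicious or buggy module could touch
         * any kernel data structure from here. */
        pr_info("minimal_mod: loaded, running in kernel context\n");
        return 0;
    }

    static void __exit minimal_exit(void)
    {
        pr_info("minimal_mod: unloaded\n");
    }

    module_init(minimal_init);
    module_exit(minimal_exit);
    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Illustrative hello-world module");

Built against the running kernel's headers with the usual obj-m Kbuild rule and loaded with insmod, this becomes part of the kernel; a proprietary video driver is exactly the same thing, only vastly larger and unauditable.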
