
A New Mindcraft Moment?
Posted Nov 6, 2015 20:50 UTC (Fri) by PaXTeam (guest, #24616) [Link]

1. this WP article was the 5th in a series of articles following the security of the internet from its beginnings to related topics of today. discussing the security of linux (or lack thereof) fits well in there. it was also a well-researched article with over two months of research and interviews, something you can't quite claim for your own recent pieces on the topic. you don't like the facts? then say so. or even better, do something constructive about them like Kees and others have been trying. still, silly comparisons to ancient crap like the Mindcraft studies and fueling conspiracies don't exactly help your case.
2. "We do a reasonable job of finding and fixing bugs."
let's start here. is this statement based on wishful thinking or cold hard facts you're going to share in your response? according to Kees, the lifetime of security bugs is measured in years. that's longer than the lifetime of many devices people buy and use and ditch in that period.
3. "Problems, whether they are security-related or not, are patched quickly,"
some are, some aren't: let's not forget the recent NMI fixes that took over 2 months to trickle down to stable kernels and we even have a user who has been waiting for over 2 weeks now: http://thread.gmane.org/gmane.comp.file-systems.btrfs/49500 (FYI, the overflow plugin is the first one Kees is trying to upstream, imagine the shitstorm if bug reports will be handled with this attitude, let's hope the btrfs guys are an exception, not the rule). anyway, two examples aren't statistics, so once again, do you have numbers or is it all wishful thinking? (it's partly a trick question because you'll also have to explain how something gets to be determined to be security related, which as we all know is a messy affair in the linux world)
4. "and the stable-update mechanism makes those patches available to kernel users."
except when it doesn't. and yes, i have numbers: grsec carries 200+ backported patches in our 3.14 stable tree.
5. "In particular, the few developers who are working in this area have never made a serious attempt to get that work integrated upstream."
you don't need to be shy about naming us, after all you did so elsewhere already. and we also explained the reasons why we haven't pursued upstreaming our code: https://lwn.net/Articles/538600/ . since i don't expect you and your readers to read any of it, here's the tl;dr: if you want us to spend thousands of hours of our time to upstream our code, you will have to pay for it. no ifs no buts, that's how the world works, that's how >90% of linux code gets in too. i personally find it quite hypocritical that well paid kernel developers are bitching about our unwillingness and inability to serve them our code on a silver platter for free. and before someone brings up the CII, go check their mail archives: after some initial exploratory discussions i explicitly asked them about supporting this long drawn out upstreaming work and got no answers.

Posted Nov 6, 2015 21:39 UTC (Fri) by patrick_g (subscriber, #44470) [Link]

Money (aha) quote:
> I suggest you spend none of your free time on this. Zero. I suggest you get paid to do this. And well.
Nobody expects you to serve your code on a silver platter for free. The Linux Foundation and big companies using Linux (Google, Red Hat, Oracle, Samsung, etc.) should pay security specialists like you to upstream your patches.

Posted Nov 6, 2015 21:57 UTC (Fri) by nirbheek (subscriber, #54111) [Link]

I would just like to point out that the way you phrased this makes your comment a tone argument[1][2]; you've (probably unintentionally) dismissed all of the parent's arguments by pointing at their presentation. The tone of PaXTeam's comment reflects the frustration built up over time with the way things work, which I think should be taken at face value, empathized with, and understood rather than simply dismissed.
1. http://rationalwiki.org/wiki/Tone_argument
2. http://geekfeminism.wikia.com/wiki/Tone_argument
Cheers,

Posted Nov 7, 2015 0:55 UTC (Sat) by josh (subscriber, #17465) [Link]

Posted Nov 7, 2015 1:21 UTC (Sat) by PaXTeam (guest, #24616) [Link]

why, is upstream known for its general civility and decency? have you even read the WP article under discussion, never mind past lkml traffic?

Posted Nov 7, 2015 5:37 UTC (Sat) by josh (subscriber, #17465) [Link]

Posted Nov 7, 2015 5:34 UTC (Sat) by gmatht (guest, #58961) [Link]

No Argument

Posted Nov 7, 2015 6:09 UTC (Sat) by josh (subscriber, #17465) [Link]

Please don't; it doesn't belong there either, and it especially doesn't need a cheering section of the sort the tech press (LWN generally excepted) tends to provide.

Posted Nov 8, 2015 8:36 UTC (Sun) by gmatht (guest, #58961) [Link]

Okay, but I was thinking of Linus Torvalds

Posted Nov 8, 2015 16:11 UTC (Sun) by pbonzini (subscriber, #60935) [Link]

Posted Nov 6, 2015 22:43 UTC (Fri) by PaXTeam (guest, #24616) [Link]

Posted Nov 6, 2015 23:00 UTC (Fri) by pr1268 (subscriber, #24648) [Link]

Why should you assume only money will fix this problem? Yes, I agree more resources should be spent on fixing Linux kernel security issues, but don't assume someone giving an organization (ahem, PaXTeam) money is the only solution. (Not meant to impugn PaXTeam's security efforts.)

The Linux development community may have had the wool pulled over its collective eyes with respect to security issues (either real or perceived), but simply throwing money at the problem won't fix this.

And yes, I do realize the commercial Linux distros do a lot (most?) of the kernel development these days, and that implies indirect monetary transactions, but it's a lot more involved than just that.

Posted Nov 7, 2015 0:36 UTC (Sat) by PaXTeam (guest, #24616) [Link]

Posted Nov 7, 2015 7:34 UTC (Sat) by nix (subscriber, #2304) [Link]

Posted Nov 7, 2015 9:49 UTC (Sat) by PaXTeam (guest, #24616) [Link]

Posted Nov 6, 2015 23:13 UTC (Fri) by dowdle (subscriber, #659) [Link]

I believe you actually agree with the gist of Jon's argument... not enough focus has been given to security in the Linux kernel... the article gets that part right... money hasn't been going towards security... and now it needs to. Aren't you glad?

Posted Nov 7, 2015 1:37 UTC (Sat) by PaXTeam (guest, #24616) [Link]

they talked to spender, not me personally, but yes, this side of the coin is well represented by us and others who were interviewed. the same way Linus is a good representative of, well, his own pet project called linux.
> And if Jon had only talked to you, his would have been too.
given that i am the author of PaX (a part of grsec), yes, talking to me about grsec matters makes it one of the best ways to research it. but if you know of somebody else, be my guest and name them, i'm pretty sure the recently formed kernel self-protection folks would be dying to engage them (or not, i don't think there's a sucker out there with thousands of hours of free time on their hands).
> [...]it also contained quite a few groan-worthy statements.
nothing is perfect but considering the audience of the WP, this is one of the better journalistic pieces on the topic, regardless of how you and others don't like the sorry state of linux security exposed in there. if you want to discuss more technical details, nothing stops you from talking to us ;).
speaking of your complaints about journalistic quality, since a previous LWN article saw fit to include a few typical dismissive claims by Linus about the quality of unspecified grsec features, with no evidence of what experience he had with the code and how current it was, how come we didn't see you or anyone else complaining about the quality of that article?
> Aren't you glad?
no, or not yet anyway. i've heard lots of empty words over the years and nothing ever manifested, or worse, all the money has gone to the pointless exercise of fixing individual bugs and the associated circus (that Linus rightfully despises FWIW).

Posted Nov 7, 2015 0:18 UTC (Sat) by bojan (subscriber, #14302) [Link]

Posted Nov 8, 2015 13:06 UTC (Sun) by k3ninho (subscriber, #50375) [Link]

Right now we've got developers from big names saying that doing all that the Linux ecosystem does *securely* is an itch that they have. Unfortunately, the surrounding cultural attitude of developers is to hit functional targets, and often performance targets. Security targets are often ignored. Ideally, the culture would shift so that we make it hard to follow insecure habits, patterns or paradigms -- that's a task that will take a sustained effort, not merely the upstreaming of patches.
Whatever the culture, these patches will go upstream eventually anyway because the ideas that they embody are now timely. I can see one way to make it happen: Linus will accept them when a big end-user (say, Intel, Google, Facebook or Amazon) delivers stuff with notes like 'here's a set of improvements, we're already using them to solve this kind of problem, here's how everything will keep working because $evidence, note carefully that you are staring down the barrels of a fork because your tree is now evolutionarily disadvantaged'. It's a game and can be gamed; I'd prefer that the community shepherds users to follow the pattern of stating problem + solution + functional test evidence + performance test evidence + security test evidence.
K3n.

Posted Nov 9, 2015 6:49 UTC (Mon) by jospoortvliet (guest, #33164) [Link]

And about that fork barrel: I'd argue it's the other way around. Google forked and lost already.

Posted Nov 12, 2015 6:25 UTC (Thu) by Garak (guest, #99377) [Link]

Posted Nov 23, 2015 6:33 UTC (Mon) by jospoortvliet (guest, #33164) [Link]

Posted Nov 7, 2015 3:20 UTC (Sat) by corbet (editor, #1) [Link]

So I have to confess to a certain amount of confusion. I could swear that the article I wrote said exactly that, but you've put a fair amount of effort into flaming it...?

Posted Nov 8, 2015 1:34 UTC (Sun) by PaXTeam (guest, #24616) [Link]

Posted Nov 6, 2015 22:52 UTC (Fri) by flussence (subscriber, #85566) [Link]

I personally think you and Nick Krause share opposite sides of the same coin. Programming ability and basic civility.

Posted Nov 6, 2015 22:59 UTC (Fri) by dowdle (subscriber, #659) [Link]

Posted Nov 7, 2015 0:16 UTC (Sat) by rahvin (guest, #16953) [Link]

I hope I'm wrong, but a hostile attitude isn't going to help anyone get paid. It's at a time like this, when you appear to be an "expert" at something and there is a demand for that expertise, that you show cooperation and a willingness to participate, because it's an opportunity. I'm rather surprised that somebody doesn't get that, but I'm older and have seen a few of these opportunities in my career and exploited the hell out of them. You only get a few of these in the average career, and a handful at the most.
Sometimes you have to invest in proving your abilities, and this is one of those moments. It seems the kernel community may finally take this security lesson to heart and embrace it, as described in the article as a "mindcraft moment". This is an opportunity for developers that may want to work on Linux security. Some will exploit the opportunity and others will thumb their noses at it. In the end those developers that exploit the opportunity will prosper from it.
I feel old even having to write that.

Posted Nov 7, 2015 1:00 UTC (Sat) by josh (subscriber, #17465) [Link]

Perhaps there is a chicken and egg problem here, but when seeking out and funding people to get code upstream, it helps to pick people and teams with a history of being able to get code upstream.
It's perfectly reasonable to prefer working out of tree, providing the ability to develop impressive and significant security advances unconstrained by upstream requirements. That's work someone might also wish to fund, if that meets their needs.

Posted Nov 7, 2015 1:28 UTC (Sat) by PaXTeam (guest, #24616) [Link]

Posted Nov 7, 2015 19:12 UTC (Sat) by jejb (subscriber, #6654) [Link]

You make this argument (implying you do research and Josh does not) and then fail to support it with any citation. It would be much more convincing if you gave up on the Onus probandi rhetorical fallacy and actually cited facts.
> case in point, it was *them* who suggested that they wouldn't fund out-of-tree work but would consider funding upstreaming work, except when pressed for the details, all i got was silence.
For those following along at home, this is the relevant set of threads:
http://lists.coreinfrastructure.org/pipermail/cii-discuss...
A quick precis is that they told you your project was a bad fit because the code was never going upstream. You told them it was because of kernel developer attitudes, so they should fund you anyway. They told you to submit a grant proposal, you whined more about the kernel attitudes and eventually even your apologist told you that submitting a proposal might be the best thing to do. At that point you went silent, not vice versa as you suggest above.
> obviously i won't spend time to write up a begging proposal just to be told that 'no sorry, we don't fund multi-year projects at all'. that's something that one should be told in advance (or heck, be part of some public rules so that others will know the rules too).
You seem to have a fatally flawed grasp of how public funding works. If you don't tell people why you want the money and how you'll spend it, they're unlikely to disburse. Saying I'm smart and I know the problem, now hand over the money, doesn't even work for most academics who have a solid reputation in the field; which is why most of them spend >30% of their time writing grant proposals.
> as for getting code upstream, how about you check the kernel git logs (minus the stuff that was not properly credited)?
jejb@jarvis> git log|grep -i 'Author: pax.*team'|wc -l
1
Stellar, I must say. And before you light off on those who have misappropriated your credit, please remember that getting code upstream on behalf of reluctant or incapable actors is a hugely valuable and time consuming skill and one of the reasons groups like Linaro exist and are well funded. If more of your stuff does go upstream, it will be due to the not inconsiderable efforts of other people in this area.
You now have a business model selling non-upstream security patches to customers. There's nothing wrong with that, it's a fairly typical first stage business model, but it does rather depend on patches not being upstream in the first place, calling into question the earnestness of your attempt to put them there.
Now here's some free advice in my field, which is assisting companies to align their businesses with open source: the selling-out-of-tree-patches route is always an eventual failure, particularly with the kernel, because if the functionality is that useful, it gets upstreamed or reinvented in spite of you, leaving you with nothing to sell. If your business plan B is selling expertise, you've got to remember that it's going to be a tough sell when you have no out of tree differentiator left and git history denies that you had anything to do with the in-tree patches. In fact "crazy security person" will become a self fulfilling prophecy. The advice? It was obvious to everyone else who read this, but for you, it is: do the upstreaming yourself before it gets done for you. That way you have a legitimate historical claim to Plan B and you might even have a Plan A selling a rollup of upstream tracking patches integrated and delivered before the distributions get around to it. Even your application to the CII couldn't be dismissed because your work wasn't going anywhere. Your alternative is to continue playing the role of Cassandra and probably suffer her eventual fate.

Posted Nov 7, 2015 23:20 UTC (Sat) by PaXTeam (guest, #24616) [Link]

> Second, for the potentially viable pieces this would be a multi-year
> full time job. Is the CII willing to fund projects at that level? If not
> we'd all end up with a number of unfinished and partially broken features.
please show me the answer to that question. without a definitive 'yes' there is no point in submitting a proposal because that is the time frame that in my opinion the job will take, and any proposal with that requirement would be shot down immediately and be a waste of my time. and i stand by my claim that such simple basic requirements should be public information.
> Stellar, I must say.
"Lies, damned lies, and statistics". you do know there's more than one way to get code into the kernel? how about you use your git-fu to find all the bug reports/suggested fixes that went in because of us? as for me specifically, Greg explicitly banned me from future contributions via af45f32d25cc1 so it's no surprise i don't send patches directly in (and that one commit you found that went in despite said ban is actually a very bad example because it's also the one that Linus censored for no good reason and made me decide to never send security fixes upstream until that practice changes).
> You now have a business model selling non-upstream security patches to customers.
now? we have had paid sponsorship for our various stable kernel series for 7 years. i wouldn't call it a business model though as it hasn't paid anyone's bills.
> [...]calling into question the earnestness of your attempt to put them there.
i must be missing something here, but what attempt? i've never in my life tried to submit PaX upstream (for all the reasons discussed already). the CII mails were exploratory, to see how serious that whole organization is about actually securing core infrastructure. in a sense i've got my answers, there's nothing more to the story.
as for your free advice, let me reciprocate: complex problems don't solve themselves. code solving complex problems doesn't write itself. people writing code that solves complex problems are few and far between, as you'll find out in short order. such people (domain experts) don't work for free, with few exceptions like ourselves. biting the hand that feeds you will only end you up in starvation.
PS: since you're so sure about kernel developers' ability to reimplement our code, maybe look at what parallel features i still maintain in PaX despite vanilla having a 'totally-not-reinvented-here' implementation and try to understand the reason. or just look at all the CVEs that affected, say, vanilla's ASLR but didn't affect mine.
PPS: Cassandra never wrote code, i do. criticizing the sorry state of kernel security is a side project when i'm bored or just waiting for the next kernel to compile (i wish LTO was more efficient).

Posted Nov 8, 2015 2:28 UTC (Sun) by jejb (subscriber, #6654) [Link]

In other words, you tried to define their process for them ... I can't think why that wouldn't work.
> "Lies, damned lies, and statistics".
The problem with ad hominem attacks is that they're singularly ineffective against a transparently factual argument. I posted a one line command anyone can run to get the number of patches you've authored in the kernel. Why don't you post an equivalent that gives figures you like more?
> i've never in my life tried to submit PaX upstream (for all the reasons discussed already).
So the master plan is to demonstrate your expertise by the number of patches you haven't submitted? Great plan, world domination beckons, sorry that one got away from you, but I'm sure you won't let it happen again.

Posted Nov 8, 2015 2:56 UTC (Sun) by PaXTeam (guest, #24616) [Link]

what? since when does asking a question define anything? isn't that how we find out what someone else thinks? isn't that what *they* have that webform (never mind the mailing lists) for as well? in other words you admit that my question was not actually answered.
> The problem with ad hominem attacks is that they are singularly ineffective against a transparently factual argument.
you didn't have an argument to begin with, that's what i explained in the part you carefully chose not to quote. i'm not here to defend myself against your clearly idiotic attempts at proving whatever you're trying to prove, as they say even in kernel circles: code talks, bullshit walks. you can look at mine and decide what i can or cannot do (not that you have the knowledge to understand most of it, mind you). that said, there're clearly other more capable people who have done so and decided that my/our work was worth something, or else no one would have been feeding off of it for the past 15 years and still counting. and as incredible as it may seem to you, life doesn't revolve around the vanilla kernel, not everybody's dying to get their code in there, especially when it means having to put up with the kind of silly hostility on lkml that you now also demonstrated here (it's ironic how you came to the defense of josh who specifically asked people not to bring that infamous lkml style here. good job there James.). as for world domination, there're many ways to achieve it and something tells me that you're clearly out of your league here since PaX has already achieved it. you're running code that implements PaX features as we speak.

Posted Nov 8, 2015 16:52 UTC (Sun) by jejb (subscriber, #6654) [Link]

I posted the one line git script giving your authored patches in response to this original request by you (this one, just in case you've forgotten: http://lwn.net/Articles/663591/):
> as for getting code upstream, how about you check the kernel git logs (minus the stuff that was not properly credited)?
I take it, from the way you have shifted ground in the previous threads, that you wish to withdraw that request?

Posted Nov 8, 2015 19:31 UTC (Sun) by PaXTeam (guest, #24616) [Link]

Posted Nov 8, 2015 22:31 UTC (Sun) by pizza (subscriber, #46) [Link]

Please provide one that is not wrong, or less wrong. It will take less time than you've already wasted here.

Posted Nov 8, 2015 22:49 UTC (Sun) by PaXTeam (guest, #24616) [Link]

anyway, since it's you guys who have a bee in your bonnet, let's test your level of intelligence too. first figure out my email address and project name, then try to find the commits that say they come from there (it brought back some memories from 2004 already, how time flies! i'm surprised i actually managed to accomplish this much while explicitly not trying, imagine if i did :). it's an incredibly complex task so by accomplishing it you'll prove yourself to be the top dog here on lwn, whatever that's worth ;).

Posted Nov 8, 2015 23:25 UTC (Sun) by pizza (subscriber, #46) [Link]

*shrug* Or don't; you're only sullying your own reputation.

Posted Nov 9, 2015 7:08 UTC (Mon) by jospoortvliet (guest, #33164) [Link]

Posted Nov 9, 2015 11:38 UTC (Mon) by hkario (subscriber, #94864) [Link]

I wouldn't either

Posted Nov 12, 2015 2:09 UTC (Thu) by jschrod (subscriber, #1646) [Link]

Posted Nov 12, 2015 8:50 UTC (Thu) by nwmcsween (guest, #62367) [Link]

Posted Nov 8, 2015 3:38 UTC (Sun) by PaXTeam (guest, #24616) [Link]

Posted Nov 12, 2015 13:47 UTC (Thu) by nix (subscriber, #2304) [Link]

Ah. I thought my memory wasn't failing me. Compare to PaXTeam's response to .
PaXTeam is not averse to outright lying if it means he gets to appear right, I see. Maybe PaXTeam's memory is failing, and this apparent contradiction is not a brazen lie, but given that the two posts were made within a day of each other I doubt it. (PaXTeam's total unwillingness to assume good faith in others deserves some reflection. Yes, I *do* think he's lying by implication here, and doing so when there's almost nothing at stake. God alone knows what he's willing to stoop to when something *is* at stake. Gosh I wonder why his fixes aren't going upstream very fast.)

Posted Nov 12, 2015 14:11 UTC (Thu) by PaXTeam (guest, #24616) [Link]

> and that one commit you found that went in despite said ban
also, someone's ban doesn't mean it will translate into someone else's execution of that ban, as is clear from the commit in question. it's somewhat sad that it takes a security fix to expose the fallacy of this policy though. the rest of your pithy ad hominem speaks for itself better than i ever could ;).

Posted Nov 12, 2015 15:58 UTC (Thu) by andreashappe (subscriber, #4810) [Link]

Posted Nov 7, 2015 19:01 UTC (Sat) by cwillu (guest, #67268) [Link]

I don't see this message in my mailbox, so presumably it got swallowed.

Posted Nov 7, 2015 22:33 UTC (Sat) by ssmith32 (subscriber, #72404) [Link]

You are aware that it's entirely possible that everyone is wrong here, right?
That the kernel maintainers need to focus more on security, that the article was biased, that you're irresponsible to decry the state of security and do nothing to help, and that your patchsets wouldn't help that much and are the wrong direction for the kernel? That just because the kernel maintainers aren't 100% right it doesn't mean you are?

Posted Nov 9, 2015 9:50 UTC (Mon) by njd27 (guest, #5770) [Link]

I think you have him backwards there. Jon is comparing this to Mindcraft because he thinks that despite being unpalatable to a lot of the community, the article might actually contain a lot of truth.

Posted Nov 9, 2015 14:03 UTC (Mon) by corbet (editor, #1) [Link]

Posted Nov 9, 2015 15:13 UTC (Mon) by spender (guest, #23067) [Link]

"There are rumors of dark forces that drove the article within the hopes of taking Linux down a notch. All of this could effectively be true"
Simply as you criticized the article for mentioning Ashley Madison despite the fact that in the very first sentence of the following paragraph it mentions it didn't contain the Linux kernel, you can't give credence to conspiracy theories with out incurring the identical criticism (in other words, you can't play the Glenn Beck "I'm simply asking the questions here!" whose "questions" fuel the conspiracy theories of others). Much like mentioning Ashley Madison for instance for non-technical readers concerning the prevalence of Linux on this planet, if you are criticizing the point out then mustn't likening a non-FUD article to a FUD article additionally deserve criticism, especially given the rosy, self-congratulatory picture you painted of upstream Linux safety?
As the PaX Group identified in the initial publish, the motivations aren't exhausting to know -- you made no mention at all about it being the 5th in an extended-working collection following a reasonably predictable time trajectory.
No, we didn't miss the general analogy you had been attempting to make, we just don't suppose you may have your cake and eat it too.
-Brad

Posted Nov 9, 2015 15:18 UTC (Mon) by karath (subscriber, #19025) [Link]

Posted Nov 9, 2015 17:06 UTC (Mon) by k3ninho (subscriber, #50375) [Link]

It is gracious of you not to blame your readers. I figure they're a fair target: there's that line about those ignorant of history being condemned to re-implement Unix -- as your readers are! :-)
K3n.

Posted Nov 9, 2015 18:43 UTC (Mon) by bojan (subscriber, #14302) [Link]

Sadly, I understand neither the "security" people (PaXTeam/spender) nor the mainstream kernel folks in terms of their attitude. I confess I have absolutely no technical capabilities on any of these matters, but if they all decided to work together, instead of having endless and pointless flame wars and blame game exchanges, a lot of the stuff would have been done already. And all the while everyone involved could have made another big pile of money on the stuff. They all seem to want a better Linux kernel, so I've got no idea what the problem is. It seems that nobody is willing to yield any of their positions even a little bit. Instead, both sides seem to be bent on trying to insult their way into forcing the other side to give up. Which, of course, never works - it just causes more pushback.
Perplexing stuff...

Posted Nov 9, 2015 19:00 UTC (Mon) by sfeam (subscriber, #2841) [Link]

Posted Nov 9, 2015 19:44 UTC (Mon) by bojan (subscriber, #14302) [Link]

Take a scientific computational cluster with an "air gap", for instance. You'd probably want most of the security stuff turned off on it to gain maximum performance, because you can trust all the users. Now take a few billion mobile phones that may be difficult or slow to patch. You'd probably want to kill most of the exploit classes there, if those devices can still run reasonably well with most security features turned on.
So, it's not either/or. It's probably "it depends". But, if the stuff is not there for everyone to compile/use in the vanilla kernel, it will be harder to make it part of everyday choices for distributors and users.

Posted Nov 6, 2015 22:20 UTC (Fri) by artem (subscriber, #51262) [Link]

How sad. This Dijkstra quote comes to mind immediately:
Software engineering, of course, presents itself as another worthy cause, but that is eyewash: if you carefully read its literature and analyse what its devotees actually do, you will discover that software engineering has accepted as its charter "How to program if you cannot."

Posted Nov 7, 2015 0:35 UTC (Sat) by roc (subscriber, #30627) [Link]

I guess that reality was too unpleasant to fit into Dijkstra's world view.

Posted Nov 7, 2015 10:52 UTC (Sat) by ms (subscriber, #41272) [Link]

Indeed. And the interesting thing to me is that once I reach that point, tests are not sufficient - model checking at a minimum and really proofs are the only way forwards. I am no security expert, my field is all distributed systems. I understand and have implemented Paxos and I believe I can explain how and why it works to anyone. But I'm currently working on some algorithms combining Paxos with a bunch of variations on VectorClocks and reasoning about causality and consensus. No test is sufficient because there are infinite interleavings of events and my head just couldn't cope with working on this either at the computer or on paper - I found I couldn't intuitively reason about this stuff at all. So I started defining the properties I wanted and gradually proving why each of them holds. Without my notes and proofs I can't even explain to myself, let alone anyone else, why this thing works. I find it both utterly obvious that this could happen and utterly terrifying - the maintenance cost of these algorithms is now an order of magnitude higher.
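To give a flavour of the kind of property involved, here is the classic vector-clock characterisation of causality (standard textbook material, quoted purely as an illustration):
\[ a \rightarrow b \iff VC(a) < VC(b), \qquad VC(a) < VC(b) \;\equiv\; (\forall i.\; VC(a)_i \le VC(b)_i) \land (\exists j.\; VC(a)_j < VC(b)_j) \]
Showing that an algorithm preserves a property like that under every possible interleaving is exactly the kind of obligation no finite test suite can discharge.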

Posted Nov 19, 2015 12:24 UTC (Thu) by Wol (subscriber, #4433) [Link]

> Indeed. And the interesting thing to me is that once I reach that point, tests are not sufficient - model checking at a minimum and really proofs are the only way forwards.
Or are you just using the wrong maths? Hobbyhorse time again :-) but to quote a fellow Pick developer ... "I often walk into a SQL development shop and see that wall - you know, the one with the huge SQL schema that no-one fully understands on it - and wonder how I can easily hold the entire schema for a Pick database of the same or greater complexity in my head".
But it's easy - by education I'm a Chemist, by interest a Physical Chemist (and by profession an unemployed programmer :-). And when I'm thinking about chemistry, I can ask myself "what is an atom made of" and think about things like the strong nuclear force. Next level up, how do atoms stick together and make molecules, and think about the electroweak force and electron orbitals, and how do chemical reactions happen. Then I think about how molecules stick together to make materials, and think about metals, and/or Van der Waals, and stuff.
Point is, you have to *layer* stuff, and look at things, and say "how can I split things off into 'black boxes' so at any one level I can assume the other levels 'just work'". For example, with Pick a FILE (table to you) stores a class - a collection of similar objects. One object per Record (row). And, same as relational, one attribute per Field (column). Can you map your relational tables to reality so simply? :-)
Going back THIRTY years, I remember a story about a guy who built little computer crabs that could quite happily scuttle around in the surf zone. Because he didn't try to work out how to solve all the problems at once - each of his (incredibly puny by today's standards - this is the 8080/Z80 era!) processors was set to just process a little bit of the problem and there was no central "brain". But it worked ... Maybe you should just write a bunch of small modules to solve each individual problem, and let the final answer "just happen".
Cheers,
Wol

Posted Nov 19, 2015 19:28 UTC (Thu) by ksandstr (guest, #60862) [Link]

To my understanding, this is exactly what a mathematical abstraction does. For example, in Z notation we might construct schemas for the various editing ("delta") operations on the base schema, and then argue about preservation of formal invariants, properties of the result, and transitivity of the operation when chained with itself, or with the previous aggregate schema composed of schemas A through O (for which these have already been argued).
The result is a set of operations that, executed in arbitrary order, lead to a set of properties holding for the result and outputs. Thus proving the formal design correct (w/ caveat lectors concerning scope, correspondence with its implementation [though that can be proven as well], and read-only ["xi"] operations).
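A minimal sketch of the shape of such an argument, with a toy state and operation invented purely for illustration:
\[
\begin{array}{l}
S \;\widehat{=}\; [\, x, y : \mathbb{N} \mid x \le y \,] \\
Op \;\widehat{=}\; [\, \Delta S;\; n? : \mathbb{N} \mid x' = x + n? \,\land\, y' = y + n? \,] \\
\text{obligation:}\quad x \le y \,\land\, x' = x + n? \,\land\, y' = y + n? \;\Rightarrow\; x' \le y'
\end{array}
\]
Discharge that obligation for every delta operation (and show the Xi operations leave the state unchanged) and you can argue that the invariant holds no matter in which order the operations are applied.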

Posted Nov 20, 2015 11:23 UTC (Fri) by Wol (subscriber, #4433) [Link]

Looking through the history of computing (and probably lots of other fields too), you'll probably find that people "can't see the wood for the trees" more often than not. They dive into the detail and completely miss the big picture.
(Medicine, an interest of mine, suffers from that too - I remember someone talking about the consultant wanting to amputate a gangrenous leg to save someone's life - oblivious to the fact that the patient was dying of cancer.)
Cheers,
Wol

Posted Nov 7, 2015 6:35 UTC (Sat) by dgc (subscriber, #6611) [Link]

https://www.youtube.com/watch?v=VpuVDfSXs-g
(LCA 2015 - "Programming Considered Harmful")
FWIW, I think that this talk is very relevant to why writing secure software is so hard..
-Dave.

Posted Nov 7, 2015 5:49 UTC (Sat) by kunitz (subscriber, #3965) [Link]

While we're spending millions on a multitude of security problems, kernel issues are not on our high-priority list. Honestly, I remember only once discussing a kernel vulnerability. The result of the analysis was that all our systems were running kernels that were older than the kernel that had the vulnerability.
But "patch management" is a real issue for us. Software must continue to work if we install security patches or update to new releases because of the end-of-life policy of a vendor. The revenue of the company depends on the IT systems running. So "not breaking user space" is a security feature for us, because a breakage of one component of our several ten thousands of Linux systems will stop the roll-out of the security update.
Another problem is embedded software or firmware. These days almost all hardware systems include an operating system, often some Linux version, providing a full network stack embedded to support remote management. Regularly those systems don't survive our mandatory security scan, because vendors still haven't updated the embedded openssl.
The real challenge is to provide a software stack that can be operated in the hostile environment of the Internet, maintaining full system integrity for ten years or even longer, without any customer maintenance. The current state of software engineering will require support for an automated update process, but vendors must understand that their business model has to be able to finance the resources providing the updates.
Overall I'm optimistic: networked software is not the first technology used by mankind that caused problems which were addressed later. Steam engine use did lead to boiler explosions, but the "engineers" were able to reduce this risk significantly over a few decades.

Posted Nov 7, 2015 10:29 UTC (Sat) by ms (subscriber, #41272) [Link]

The following is all guess work; I'd be keen to know if others have evidence either one way or another on this: the people who learn to hack into these systems via kernel vulnerabilities know that the skills they've learnt have a market. Thus they don't tend to hack in order to wreak havoc - indeed, on the whole, where data has been stolen in order to be released to embarrass people, it _appears_ as if those hacks are via much simpler vectors. I.e. lesser skilled hackers find there's a whole load of low-hanging fruit which they can get at. They're not being paid ahead of time for the data, so they turn to extortion instead. They don't cover their tracks, and they can often be found and charged with criminal offences.
So if your security meets a certain basic level of proficiency and/or your company isn't doing anything that puts it near the top of "companies we'd like to embarrass" (I suspect the latter is far more effective at keeping systems "safe" than the former), then the hackers that get into your system are likely to be skilled, paid, and probably not going to do much damage - they're stealing data for a competitor / state. So that doesn't bother your bottom line - at least not in a way which your shareholders will be aware of. So why fund security?

Posted Nov 7, 2015 17:02 UTC (Sat) by citypw (guest, #82661) [Link]

However, some effective mitigation at the kernel level would be very useful to crush a cybercriminal's or skiddie's attempts. If one of your customers running a futures trading platform exposes some open API to their clients, and the server has some memory corruption bugs that can be exploited remotely, then you know there are known attack techniques (such as offset2lib) that can help the attacker make the weaponized exploit much easier. Will you explain the failosophy "a bug is a bug" to your customer and tell them it'd be okay? Btw, offset2lib is useless against PaX/Grsecurity's ASLR implementation.
For most commercial uses, more security mitigation within the software won't cost you extra budget. You still need to do the regression test for each upgrade anyway.

Posted Nov 12, 2015 16:14 UTC (Thu) by andreashappe (subscriber, #4810) [Link]

Keep in mind that I focus on external internet-based penetration tests and that in-house tests (local LAN) will probably yield different results.

Posted Nov 7, 2015 20:33 UTC (Sat) by mattdm (subscriber, #18) [Link]

I keep reading this headline as "a new Minecraft moment", and thinking that maybe they've decided to follow up the .NET thing by open-sourcing Minecraft. Oh well. I mean, security is good too, I guess.

Posted Nov 7, 2015 22:24 UTC (Sat) by ssmith32 (subscriber, #72404) [Link]

Posted Nov 12, 2015 17:29 UTC (Thu) by smitty_one_every (subscriber, #28989) [Link]

Posted Nov 8, 2015 10:34 UTC (Sun) by jcm (subscriber, #18262) [Link]

Posted Nov 9, 2015 7:15 UTC (Mon) by jospoortvliet (guest, #33164) [Link]

Posted Nov 9, 2015 15:53 UTC (Mon) by neiljerram (subscriber, #12005) [Link]

(Oh, and I was also still wondering how Minecraft had taught us about Linux performance - so thanks to the other comment thread that pointed out the 'd', not 'e'.)

Posted Nov 9, 2015 11:31 UTC (Mon) by ortalo (guest, #4654) [Link]

I'd just like to add that in my opinion, there is a general problem with the economics of computer security, which is especially visible currently. Two problems even, possibly.
First, the money spent on computer security is often diverted towards the so-called security "circus": quick, easy solutions that are mainly selected just in order to "do something" and get better press. It took me a long time - maybe decades - to claim that no security mechanism at all is better than a bad mechanism. But now I firmly believe in this attitude and would rather take the risk knowingly (provided that I can save money/resources for myself) than take a bad approach at solving it (and have no money/resources left when I realize I should have done something else). And I find there are lots of bad or incomplete approaches currently available in the computer security field.
Those spilling our rare money/resources on ready-made useless tools should get the bad press they deserve. And we definitely need to enlighten the press on that, because it is not so easy to understand the efficiency of security mechanisms (which, by definition, should prevent things from happening).
Second, and that may be newer and more worrying: the flow of money/resources is oriented towards attack tools and vulnerability discovery much more than towards new protection mechanisms.
This is especially worrying as cyber "defence" initiatives look more and more like the usual industrial projects aimed at producing weapons or intelligence systems. Furthermore, bad useless weapons, because they only work against our very vulnerable current systems; and bad intelligence systems, as even basic school-level encryption scares them into uselessness.
However, all the resources are for those adult teenagers playing the white hat hackers with not-so-difficult programming tricks or network monitoring or WWI-level cryptanalysis. And now also for the cyberwarriors and cyberspies that have yet to prove their usefulness fully (especially for peace protection...).
Personally, I would happily leave them all the hype; but I will forcefully claim that they have no right at all to any of the budget allocation decisions. Only those working on protection should. And yep, it means we should decide where to put their resources. We have to claim the exclusive lock for ourselves this time. (and I guess the PaX team could be among the first to benefit from such a change).
While thinking about it, I would not even leave white-hat or cyber-guys any hype in the end. That's more publicity than they deserve.
I crave for the day I will read in the newspaper that: "Another of these ill advised debutant programmer hooligans that pretend to be cyber-pirates/warriors modified some well known virus program code exploiting a programmer mistake and still managed to bring one of those unfinished and bad quality programs, X, that we are all obliged to use, to its knees, annoying millions of regular users with his unfortunate cyber-vandalism. All the protection experts unanimously recommend that, once again, the budget of the cyber-command be retargeted, or at least levelled off, in order to bring more security engineer positions into the academic field or civilian industry. And that X's producer, XY Inc., be held liable for the potential losses if proved to be unprofessional in this affair."

Hmmm - cyber-hooligans - I like the label. Though it doesn't apply well to the battlefield-oriented variant.

Posted Nov 9, 2015 14:28 UTC (Mon) by drag (guest, #31333) [Link]

The state of the 'software security industry' is a f-ng disaster. Failure of the highest order. There are massive amounts of money going into 'cyber security', but it is mostly spent on government compliance and audit efforts. This means that instead of actually putting effort into correcting issues and mitigating future problems, the majority of the effort goes into taking existing applications and making them conform to committee-driven guidelines with the minimum amount of effort and changes.
Some level of regulation and standardization is absolutely needed, but lay people are clueless and completely unable to discern the difference between somebody who has worthwhile experience versus some company that has spent millions on slick marketing and 'native advertising' on big websites and computer magazines. The people with the money sadly only have their own judgment to rely on when buying into 'cyber security'.
> Those spilling our rare money/resources on ready-made useless tools should get the bad press they deserve.
There is no such thing as 'our rare money/resources'. You have your money, I have mine. Money being spent by some company like Red Hat is their money. Money being spent by governments is the government's money. (you, really, have far more control over how Walmart spends its money than over what your government does with theirs)
> This is especially worrying as cyber "defence" initiatives look more and more like the usual industrial projects aimed at producing weapons or intelligence systems. Furthermore, bad useless weapons, because they only work against our very vulnerable current systems; and bad intelligence systems, as even basic school-level encryption scares them into uselessness.
Having secure software with strong encryption mechanisms in the hands of the public runs counter to the interests of most major governments. Governments, like any other for-profit organization, are primarily interested in self-preservation. Money spent on drone projects or banking auditing/oversight regulation compliance is FAR more valuable to them than trying to help the public have a secure mechanism for making phone calls. Especially when those secure mechanisms interfere with data collection efforts.
Unfortunately you/I/we can't depend on some magical benefactor with deep pockets to sweep in and make Linux better. It's just not going to happen.
Companies like Red Hat have been massively helpful in spending resources to make the Linux kernel more capable.. however they are driven by the need to show a profit, which means they have to cater directly to the sort of requirements established by their customer base. Customers for EL tend to be much more focused on reducing costs associated with management and software development than on security at the low-level OS.
Enterprise Linux customers tend to rely on physical, human policy, and network security to protect their 'soft' interiors from being exposed to external threats.. assuming (rightly) that there is very little they can do to actually harden their systems. In fact, when the choice comes down to security vs convenience, I am sure that most customers will happily defeat or strip out any security mechanisms introduced into Linux.
On top of that, most Enterprise software is extremely bad. So much so that 10 hours spent on improving a web front-end will yield more real-world security benefits than a thousand hours spent on Linux kernel bugs for most businesses.
Even for 'normal' Linux users, a security bug in their Firefox's NPAPI flash plugin is far more devastating and poses a massively greater threat than an obscure Linux kernel buffer overflow problem. It's just not really important for attackers to get 'root' to get access to the important data... typically all of which is contained in a single user account.
In the end it is up to people like you and myself to put the effort and money into improving Linux security. For both ourselves and other people.

Posted Nov 10, 2015 11:05 UTC (Tue) by ortalo (guest, #4654) [Link]

Spilling has always been the case, but now, to me and in computer security, most of the money seems spilled due to bad faith. And this is generally your money or mine: either tax-fuelled governmental resources or corporate costs that are directly re-imputed into the prices of the goods/software we are told we are *obliged* to buy. (Look at corporate firewalls, home alarms or antivirus software marketing discourse.)
I think it is time to point out that there are several "malicious malefactors" around and that there is a real need to identify and sanction them and confiscate the resources they have somehow managed to monopolize. And I do *not* think Linus is among such culprits by the way. But I think he may be among the ones hiding their heads in the sand about the aforementioned evil actors, while he probably has more leverage to counteract them or oblige them to reveal themselves than many of us.
I find that to be of brown-paper-bag level (though head-in-the-sand is somehow a new interpretation).
Finally, I think you are right to say that currently it is only up to us individuals to try really to do something to improve Linux or computer security. But I still think that I am right to say that this is not normal; especially while some very serious people get very serious salaries to distribute randomly some difficult-to-evaluate budgets.
[1] A paradoxical situation when you think about it: in a domain where you are primarily preoccupied by malicious people, everyone should have factual, transparent and honest conduct as the first priority of their mind.

Posted Nov 9, 2015 15:47 UTC (Mon) by MarcB (subscriber, #101804) [Link]

It even has a nice, seven line BASIC-pseudo-code that describes the current situation and clearly shows that we are stuck in an endless loop. It does not answer the big question, though: how to write better software.
The sad thing is that this is from 2005 and all the things that were obviously stupid ideas 10 years ago have proliferated even more.

Posted Nov 10, 2015 11:20 UTC (Tue) by ortalo (guest, #4654) [Link]

Note that IMHO, we should investigate further why these dumb things proliferate and get so much support.
If it is only human psychology, well, let's fight it: e.g. Mozilla has shown us that they can do great things given the right message.
If we are facing active people exploiting public credulity: let's identify and fight them.
But, more importantly, let's capitalize on this knowledge and secure *our* systems, to demonstrate at a minimum (and more later on of course).
Your reference's conclusion is especially nice to me. "challenge [...] the conventional wisdom and the status quo": that job I would happily accept.

Posted Nov 30, 2015 9:39 UTC (Mon) by paulj (subscriber, #341) [Link]

That rant is itself a bunch of "empty calories". The converse of the items it rants about, which it is suggesting at some level, would be as bad or worse, and indicative of the worst kind of security thinking that has put a lot of people off. Alternatively, it is just a rant that offers little of value.
Personally, I think there is no magic bullet. Security is and always has been, in human history, an arms race between defenders and attackers, and one that is inherently a trade-off between usability, risks and costs. If there are mistakes being made, it is that we should probably spend more resources on defences that could block entire classes of attacks. E.g., why is the GRSec kernel hardening stuff so hard to apply to common distros (e.g. there is no reliable source of a GRSec kernel for Fedora or RHEL, is there? see the sketch of the manual route below)? Why does the whole Linux kernel run in one security context? Why are we still writing lots of software in C/C++, often without any basic security-checking abstractions (e.g. basic bounds-checking layers in between I/O and parsing layers, say)? Can hardware do more to provide security with speed?
No doubt there are many people working on "block classes of attacks" stuff, the question is, why aren't there more resources directed there?
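For what it's worth, the manual route is roughly the following (a sketch only; the patch file name is illustrative and has to match the exact kernel version being patched):
$ cd linux-4.3.3
$ patch -p1 < ../grsecurity-3.1-4.3.3-201512282135.patch
$ make menuconfig    # enable the new security options, then build and package the kernel yourself
That per-version, rebuild-everything workflow is hard to reconcile with distro kernels and their own patch queues, which may be part of the answer.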

Posted Nov 10, 2015 2:06 UTC (Tue) by timrichardson (subscriber, #72836) [Link]

>There are many reasons why Linux lags behind in defensive security technologies, but one of the key ones is that the companies making money on Linux have not prioritized the development and integration of those technologies.
That seems like a reason which is really worth exploring. Why is it so?
I think it is not obvious why this doesn't get some more attention. Is it possible that the people with the money are right not to prioritise this more highly? After all, what interest do they have in an insecure, exploitable kernel? Where there is common cause, linux development gets resourced. It has been this way for decades. If filesystems qualify for common interest, surely security does. So there doesn't seem to be any obvious reason why this issue doesn't get more mainstream attention, except that it really already gets enough. You could say that disaster has not struck yet, that the iceberg has not been hit. But it seems to me that the linux development process is not overly reactive elsewhere.

Posted Nov 10, 2015 15:53 UTC (Tue) by raven667 (subscriber, #5198) [Link]

That's an interesting question, certainly that's what they really believe regardless of what they publicly say about their commitment to security technologies. What is the actually demonstrated downside for kernel developers and the organizations that pay them? As far as I can tell there isn't enough consequence for the lack of security to drive more investment, so we are left begging and cajoling unconvincingly.

Posted Nov 12, 2015 14:37 UTC (Thu) by ortalo (guest, #4654) [Link]

The key concern with this domain is that it relates to malicious faults. So, when consequences manifest themselves, it is too late to act. And if the current commitment to the lack of a voluntary approach persists, we will oscillate between phases of relaxed unconsciousness and anxious paranoia.
Admittedly, kernel developers seem pretty resistant to paranoia. That is a good thing. But I am waiting for the day when armed land-drones patrol US streets in the vicinity of their children's schools for them to discover the feeling. The days are not so distant when innocent lives will unconsciously rely on the security of (linux-based) computer systems; under water, that is already the case if I remember correctly my last dive, as well as in several recent cars according to some reports.

Posted Nov 12, 2015 14:32 UTC (Thu) by MarcB (subscriber, #101804) [Link]

Classic hosting companies that use Linux as an exposed front-end system are retreating from development while HPC, mobile and "generic enterprise", i.e. RHEL/SLES, are pushing the kernel in their directions.
This is really not that surprising: for hosting needs the kernel has been "done" for quite some time now. Apart from support for current hardware there is not much use for newer kernels. Linux 3.2, or even older, works just fine.
Hosting does not need scalability to hundreds or thousands of CPU cores (one uses commodity hardware), complex instrumentation like perf or tracing (systems are locked down as much as possible) or advanced power-management (if the system does not have constant high load, it is not making enough money). So why should hosting companies still make strong investments in kernel development? Even if they had something to contribute, the hurdles for contribution have become higher and higher.
For their security needs, hosting companies already use Grsecurity. I have no numbers, but some experience suggests that Grsecurity is basically a fixed requirement for shared hosting.
On the other hand, kernel security is almost irrelevant on the nodes of a supercomputer or on a system running big business databases that are wrapped in layers of middleware. And mobile vendors simply don't care.

Posted Nov 10, 2015 4:18 UTC (Tue) by bronson (subscriber, #4806) [Link]

Linking

Posted Nov 10, 2015 13:15 UTC (Tue) by corbet (editor, #1) [Link]

Posted Nov 11, 2015 22:38 UTC (Wed) by rickmoen (subscriber, #6943) [Link]

The assembled likely recall that in August 2011, kernel.org was root-compromised. I am sure the system's hard drives were sent off for forensic examination, and we have all been waiting patiently for the answer to the most important question: what was the compromise vector? From shortly after the compromise was discovered on August 28, 2011, right through April 1st, 2013, kernel.org included this note at the top of the site News: 'Thanks to all for your patience and understanding during our outage and please bear with us as we bring up the different kernel.org systems over the next few weeks. We will be writing up a report on the incident in the future.' (Emphasis added.) That note was removed (along with the rest of the site News) during a May 2013 edit, and there hasn't been -- to my knowledge -- a peep about any report on the incident since then. This has been disappointing. When the Debian Project discovered sudden compromise of several of its servers in 2007, Wichert Akkerman wrote and posted an excellent public report on exactly what happened. Likewise, the Apache Foundation did the right thing with good public post-mortems of the 2010 Web site breaches. Ars Technica's Dan Goodin was still trying to follow up on the lack of a post-mortem on the kernel.org meltdown -- in 2013. Two years ago. He wrote: Linux developer and maintainer Greg Kroah-Hartman told Ars that the investigation has yet to be completed and gave no timetable for when a report might be released. [...] Kroah-Hartman also told Ars kernel.org systems were rebuilt from scratch following the attack. Officials have developed new tools and procedures since then, but he declined to say what they are. "There will be a report later this year about site [sic] has been engineered, but don't quote me on when it will be released as I am not responsible for it," he wrote.
Who is responsible, then? Is anyone? Anyone? Bueller? Or is it a state secret, or what? Two years since Greg K-H said there would be a report 'later this year', and four years since the meltdown: nothing yet. How about some information? Rick Moen
[email protected]

Posted Nov 12, 2015 14:19 UTC (Thu) by ortalo (guest, #4654) [Link]

Less seriously, note that if even the Linux mafia doesn't know, it must be the Venusians; they are notoriously stealthy in their invasions.

Posted Nov 14, 2015 12:46 UTC (Sat) by error27 (subscriber, #8346) [Link]

I know the kernel.org admins have given talks about some of the new protections that have been put into place. There are no more shell logins; instead everything uses gitolite. The different services are on different hosts. There are more kernel.org staff now. People are using two-factor authentication. Some other stuff. Do a search for Konstantin Ryabitsev.

Posted Nov 14, 2015 15:58 UTC (Sat) by rickmoen (subscriber, #6943) [Link]

I beg your pardon if I was somehow unclear: that was said to have been the path of entry to the machine (and I can readily believe that, as it was also the exact path of entry into shells.sourceforge.net, many years prior, around 2002, and into many other shared Web hosts for years). But that is not what is of main interest, and is not what the long-promised forensic study would primarily concern: how did the intruders escalate to root? To quote the kernel.org administrator in the August 2011 Dan Goodin article you cited: 'How they managed to exploit that to root access is currently unknown and is being investigated'. OK, people, you have now had four years of investigation. What was the path of escalation to root? (Also, other details that would logically be covered by a forensic study, such as: Whose key was stolen? Who stole the key?) That is the sort of post-mortem that was promised prominently on the front page of kernel.org, to reporters, and elsewhere for a long time (and then summarily removed as a promise from the front page of kernel.org, without comment, along with the rest of the site News section, and apparently dropped). It would still be appropriate to know and share that information. Especially the datum of whether the path to root privilege was or was not a kernel bug (and, if not, what it was). Rick Moen
[email protected]

Posted Nov 22, 2015 12:42 UTC (Sun) by rickmoen (subscriber, #6943) [Link]

I've done a closer review of revelations that came out soon after the break-in, and think I've found the answer, via a leaked copy of kernel.org chief sysadmin John H. 'Warthog9' Hawley's Aug. 29, 2011 e-mail to shell users (two days before the public was informed), plus Aug. 31st comments to The Register's Dan Goodin by 'two security researchers who were briefed on the breach': root escalation was via exploit of a Linux kernel security hole. Per the two security researchers, it was one both extremely embarrassing (wide-open access to /dev/mem contents including the running kernel's image in RAM, in 2.6 kernels of that day) and known-exploitable for the prior six years by canned 'sploits, one of which (Phalanx) was run by some script kiddie after entry using stolen dev credentials. Other tidbits:
- Site admins left the root-compromised Web servers running with all services still lit up, for several days.
- Site admins and Linux Foundation sat on the information and failed to inform the public for those same several days.
- Site admins and Linux Foundation have never revealed whether trojaned Linux source tarballs were posted in the http/ftp tree for the 19+ days before they took the site down. (Yes, git checkout was fine, but what about the thousands of tarball downloads?)
- After promising a report for several years and then quietly removing that promise from the front page of kernel.org, the Linux Foundation now stonewalls press queries.
I posted my best attempt at reconstructing the story, absent an actual report from insiders, to SVLUG's main mailing list yesterday. (Necessarily, there are surmises. If the people with the details had been more forthcoming, we would know what happened for sure.) I do have to wonder: if there's another embarrassing screwup, will we even be told about it at all? Rick Moen
[email protected]
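For readers who haven't met the hole described above: on 2.6 kernels built without CONFIG_STRICT_DEVMEM, a root process could read (and write) arbitrary physical RAM through /dev/mem, including the running kernel's image, which is the primitive canned exploits of that era leaned on. Below is a minimal sketch of that primitive only, not of the Phalanx exploit itself; the 16 MiB offset is an arbitrary illustration. On a current kernel with STRICT_DEVMEM enabled, the read should fail with EPERM.

#define _XOPEN_SOURCE 700
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    /* An arbitrary physical address well past the low-memory window that
       STRICT_DEVMEM still allows for legacy userspace. */
    const off_t phys_addr = 16 * 1024 * 1024;
    unsigned char buf[64];

    int fd = open("/dev/mem", O_RDONLY);   /* normally only readable by root */
    if (fd < 0) {
        perror("open /dev/mem");
        return EXIT_FAILURE;
    }

    ssize_t n = pread(fd, buf, sizeof(buf), phys_addr);
    if (n < 0)
        perror("pread (blocked, e.g. by STRICT_DEVMEM)");
    else
        printf("read %zd bytes of physical RAM at offset 0x%llx\n",
               n, (unsigned long long)phys_addr);

    close(fd);
    return EXIT_SUCCESS;
}

The fix for this class of problem (CONFIG_STRICT_DEVMEM, later the default on mainstream distros) is exactly the kind of "block a whole class of attacks" hardening discussed earlier in the thread.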

Posted Nov 22, 2015 14:25 UTC (Sun) by spender (guest, #23067) [Link]

Also, it is preferable to use live memory acquisition prior to powering off the system; otherwise you lose out on memory-resident artifacts that you can perform forensics on.
-Brad

How about the long-overdue post-mortem on the August 2011 kernel.org compromise?

Posted Nov 22, 2015 16:28 UTC (Sun) by rickmoen (subscriber, #6943) [Link]

Thanks for your comments, Brad. I'd been relying on Dan Goodin's claim of Phalanx being what was used to gain root, in the bit where he cited 'two security researchers who were briefed on the breach' to that effect. Goodin also elaborated: 'Fellow security researcher Dan Rosenberg said he was also briefed that the attackers used Phalanx to compromise the kernel.org machines.' This was the first time I'd heard of a rootkit being claimed to be bundled with an attack tool, and I noted that oddity in my posting to SVLUG. That having been said, yeah, the Phalanx README doesn't specifically claim this, so perhaps Goodin and his several 'security researcher' sources blew that detail, and nobody outside kernel.org insiders yet knows the escalation path used to gain root.
> Also, it's preferable to use live memory acquisition prior to powering off the system, otherwise you lose out on memory-resident artifacts that you can perform forensics on.
Arguable, but it's a tradeoff; you can poke the compromised live system for state data, but with the drawback of leaving your system running under hostile control. I was always taught that, on balance, it's better to pull power to end the intrusion. Rick Moen
[email protected]

Posted Nov 20, 2015 8:23 UTC (Fri) by toyotabedzrock (guest, #88005) [Link]

Posted Nov 20, 2015 9:31 UTC (Fri) by gioele (subscriber, #61675) [Link]

With "something" you imply those who produce those closed supply drivers, proper?
If the "consumer product companies" just stuck to using components with mainlined open source drivers, then updating their products would be much easier.

A new Mindcraft moment?

Posted Nov 20, 2015 11:29 UTC (Fri) by Wol (subscriber, #4433) [Link]

They have ring-zero privilege, can access protected memory directly, and cannot be audited. Trick a kernel into running a compromised module and it's game over.
Even tickle a bug in a "good" module, and it's probably game over - in this case quite literally, as such modules are usually video drivers optimised for games ...
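One mitigation that already exists for the module risk described above is the kernel.modules_disabled sysctl: it is a one-way switch, so a machine that has finished loading the drivers it needs can lock out any further module loading until reboot. A minimal sketch, equivalent to running "sysctl kernel.modules_disabled=1" as root; whether that is acceptable obviously depends on the workload, and it does nothing against bugs in modules that are already loaded.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* One-way toggle: once set to 1, the kernel refuses to load or unload
       any module until the next reboot, and the value cannot be set back. */
    FILE *f = fopen("/proc/sys/kernel/modules_disabled", "w");
    if (!f) {
        perror("open /proc/sys/kernel/modules_disabled");
        return EXIT_FAILURE;
    }
    if (fputs("1\n", f) == EOF || fclose(f) == EOF) {
        perror("write modules_disabled");
        return EXIT_FAILURE;
    }
    puts("module loading disabled until reboot");
    return EXIT_SUCCESS;
}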
