
A new Mindcraft moment?
Posted Nov 6, 2015 20:50 UTC (Fri) by PaXTeam (guest, #24616) [Link]

1. this WP article was the 5th in a series of articles following the security of the internet from its beginnings to related matters of today. discussing the security of linux (or lack thereof) fits well in there. it was also a well-researched article with over two months of research and interviews, something you can't quite claim yourself for your recent pieces on the subject. you don't like the facts? then say so. or even better, do something constructive about them like Kees and others have been trying. however, silly comparisons to old crap like the Mindcraft reports and fueling conspiracies don't exactly help your case.
2. "We do a reasonable job of finding and fixing bugs."
let's start here. is this statement based on wishful thinking or cold hard data you are going to share in your response? according to Kees, the lifetime of security bugs is measured in years. that's longer than the lifetime of many devices people buy, use and ditch in that period.
3. "Problems, whether they are security-related or not, are patched quickly,"
some are, some aren't: let's not forget the recent NMI fixes that took over 2 months to trickle down to stable kernels, and we also have a user who has been waiting for over 2 weeks now: http://thread.gmane.org/gmane.comp.file-systems.btrfs/49500 (FYI, the overflow plugin is the first one Kees is trying to upstream; imagine the shitstorm if bugreports will be treated with this attitude, let's hope the btrfs guys are an exception, not the rule). anyway, two examples are not statistics, so once again, do you have numbers or is it all wishful thinking? (it's partly a trick question because you'll also have to explain how something gets determined to be security related, which as we all know is a messy business in the linux world)
4. "and the stable-update mechanism makes those patches available to kernel users."
except when it doesn't. and yes, i have numbers: grsec carries 200+ backported patches in our 3.14 stable tree.
5. "Specifically, the few developers who are working in this area have never made a serious attempt to get that work integrated upstream."
you don't have to be shy about naming us, after all you did so elsewhere already. and we also explained the reasons why we haven't pursued upstreaming our code: https://lwn.net/Articles/538600/ . since i don't expect you and your readers to read any of it, here's the tl;dr: if you want us to spend hundreds of hours of our time to upstream our code, you will have to pay for it. no ifs, no buts, that's how the world works, that's how >90% of linux code gets in too. i personally find it pretty hypocritical that well-paid kernel developers are bitching about our unwillingness and inability to serve them our code on a silver platter for free. and before someone brings up the CII, go check their mail archives: after some initial exploratory discussions i explicitly asked them about supporting this long drawn-out upstreaming work and got no answers.

Posted Nov 6, 2015 21:39 UTC (Fri) by patrick_g (subscriber, #44470) [Link]

Money (aha) quote:
> I suggest you spend none of your free time on this. Zero. I suggest you get paid to do this. And well.
Nobody expects you to serve your code on a silver platter for free. The Linux Foundation and big companies using Linux (Google, Red Hat, Oracle, Samsung, etc.) should pay security specialists like you to upstream your patches.

Posted Nov 6, 2015 21:57 UTC (Fri) by nirbheek (subscriber, #54111) [Link]

I'd just like to point out that the way you phrased this makes your comment a tone argument[1][2]; you have (probably unintentionally) dismissed all of the parent's arguments by pointing at its presentation. The tone of PaXTeam's comment shows the frustration built up over the years with the way things work, which I think should be taken at face value, empathized with, and understood rather than simply dismissed.
1. http://rationalwiki.org/wiki/Tone_argument
2. http://geekfeminism.wikia.com/wiki/Tone_argument
Cheers,

Posted Nov 7, 2015 0:55 UTC (Sat) by josh (subscriber, #17465) [Link]

Posted Nov 7, 2015 1:21 UTC (Sat) by PaXTeam (guest, #24616) [Link]

why, is upstream known for its basic civility and decency? have you even read the WP post under discussion, never mind past lkml traffic?

Posted Nov 7, 2015 5:37 UTC (Sat) by josh (subscriber, #17465) [Link]

Posted Nov 7, 2015 5:34 UTC (Sat) by gmatht (guest, #58961) [Link]

No Argument

Posted Nov 7, 2015 6:09 UTC (Sat) by josh (subscriber, #17465) [Link]

Please don't; it doesn't belong there either, and it especially doesn't need the kind of cheering section the tech press (LWN usually excepted) tends to provide.

Posted Nov 8, 2015 8:36 UTC (Sun) by gmatht (guest, #58961) [Link]

OK, but I was thinking of Linus Torvalds.

Posted Nov 8, 2015 16:11 UTC (Sun) by pbonzini (subscriber, #60935) [Link]

Posted Nov 6, 2015 22:43 UTC (Fri) by PaXTeam (guest, #24616) [Link]

Posted Nov 6, 2015 23:00 UTC (Fri) by pr1268 (subscriber, #24648) [Link]

Why do you assume only money will fix this problem? Yes, I agree more resources should be spent on fixing Linux kernel security issues, but don't assume someone giving an organization (ahem, PaXTeam) money is the only solution. (Not meant to impugn PaXTeam's security efforts.)

The Linux development community may have had the wool pulled over its collective eyes with respect to security issues (whether real or perceived), but merely throwing money at the problem won't fix it.

And yes, I do realize the commercial Linux distros do much (most?) of the kernel development these days, and that implies indirect monetary transactions, but it's much more involved than just that.

Posted Nov 7, 2015 0:36 UTC (Sat) by PaXTeam (guest, #24616) [Link]

Posted Nov 7, 2015 7:34 UTC (Sat) by nix (subscriber, #2304) [Link]

Posted Nov 7, 2015 9:49 UTC (Sat) by PaXTeam (guest, #24616) [Link]

Posted Nov 6, 2015 23:13 UTC (Fri) by dowdle (subscriber, #659) [Link]

I think you actually agree with the gist of Jon's argument... not enough focus has been given to security in the Linux kernel... the article gets that part right... money hasn't been going toward security... and now it needs to. Aren't you glad?

Posted Nov 7, 2015 1:37 UTC (Sat) by PaXTeam (guest, #24616) [Link]

they talked to spender, not me personally, but yes, this side of the coin is well represented by us and others who were interviewed. the same way Linus is a good representative of, well, his own pet project called linux.
> And if Jon had only talked to you, his would have been too.
given that i'm the author of PaX (part of grsec), yes, talking to me about grsec matters makes it one of the best ways to research it. but if you know of someone else, be my guest and name them; i'm fairly sure the recently formed kernel self-protection folks would be dying to engage them (or not, i don't think there's a sucker out there with hundreds of hours of free time on their hands).
> [...]it also contained quite a few groan-worthy statements.
nothing is perfect, but considering the audience of the WP, that's one of the better journalistic pieces on the topic, regardless of how you and others dislike the sorry state of linux security exposed in there. if you'd like to discuss more technical details, nothing stops you from talking to us ;).
speaking of your complaints about journalistic qualities: since a previous LWN article saw fit to include several typical dismissive claims by Linus about the quality of unspecified grsec features, with no evidence of what experience he had with the code and how recent it was, how come we didn't see you or anyone else complaining about the quality of that article?
> Aren't you glad?
no, or not yet anyway. i've heard lots of empty words over the years and nothing ever materialized, or worse, all the money has gone to the pointless exercise of fixing individual bugs and the related circus (which Linus rightfully despises, FWIW).

Posted Nov 7, 2015 0:18 UTC (Sat) by bojan (subscriber, #14302) [Link]

Posted Nov 8, 2015 13:06 UTC (Sun) by k3ninho (subscriber, #50375) [Link]

Right now we've got developers from big names saying that doing everything the Linux ecosystem does *safely* is an itch they have. Sadly, the surrounding cultural attitude of developers is to hit functional goals, and occasionally performance goals. Security goals are often neglected. Ideally, the culture would shift so that we make it difficult to follow insecure habits, patterns or paradigms -- that's a process that will take sustained effort, not merely the upstreaming of patches.
Whatever the culture, these patches will go upstream eventually anyway, because the concepts they embody are now timely. I can see a way to make it happen: Linus will accept them when a big end-user (say, Intel, Google, Facebook or Amazon) delivers stuff with notes like 'here's a set of improvements, we're already using them to solve this kind of problem, here's how everything will keep working because $evidence, note carefully that you are staring down the barrels of a fork because your tree is now evolutionarily disadvantaged'. It's a game and can be gamed; I'd prefer that the community shepherd users to follow the pattern of stating problem + solution + functional test evidence + performance test evidence + security test evidence.
K3n.

Posted Nov 9, 2015 6:49 UTC (Mon) by jospoortvliet (guest, #33164) [Link]

And about that fork barrel: I'd argue it's the other way around. Google forked and lost already.

Posted Nov 12, 2015 6:25 UTC (Thu) by Garak (guest, #99377) [Link]

Posted Nov 23, 2015 6:33 UTC (Mon) by jospoortvliet (guest, #33164) [Link]

Posted Nov 7, 2015 3:20 UTC (Sat) by corbet (editor, #1) [Link]

So I have to admit to a certain amount of confusion. I could swear that the article I wrote said exactly that, but you've put a fair amount of effort into flaming it...?

Posted Nov 8, 2015 1:34 UTC (Sun) by PaXTeam (guest, #24616) [Link]

Posted Nov 6, 2015 22:52 UTC (Fri) by flussence (subscriber, #85566) [Link]

I personally think you and Nick Krause share opposite sides of the same coin. Programming ability and basic civility.

Posted Nov 6, 2015 22:59 UTC (Fri) by dowdle (subscriber, #659) [Link]

Posted Nov 7, 2015 0:16 UTC (Sat) by rahvin (guest, #16953) [Link]

I hope I'm wrong, but a hostile attitude isn't going to help anybody get paid. It's at a time like this, when there's something you appear to be an "expert" at and there's demand for that expertise, that you demonstrate cooperation and willingness to participate, because it's an opportunity. I'm relatively surprised that someone doesn't get that, but I'm older and have seen a few of these opportunities in my career and exploited the hell out of them. You only get a few of them in the average career, and a handful at the most.
Sometimes you have to invest in proving your skills, and this is one of those moments. It seems the kernel community may finally take this security lesson to heart and embrace it, described in the article as a "Mindcraft moment". This is an opportunity for developers who may want to work on Linux security. Some will exploit the opportunity and others will thumb their noses at it. In the end, the developers who exploit the opportunity will prosper from it.
I feel old even having to write that.

Posted Nov 7, 2015 1:00 UTC (Sat) by josh (subscriber, #17465) [Link]

Maybe there's a chicken-and-egg problem here, but when seeking out and funding people to get code upstream, it helps to pick people and teams with a history of being able to get code upstream.
It's perfectly reasonable to prefer working out of tree, preserving the ability to develop impressive and critical security advances unconstrained by upstream requirements. That's work somebody might also want to fund, if it meets their needs.

Posted Nov 7, 2015 1:28 UTC (Sat) by PaXTeam (guest, #24616) [Link]

Posted Nov 7, 2015 19:12 UTC (Sat) by jejb (subscriber, #6654) [Link]

You make this argument (implying you do research and Josh doesn't) and then fail to support it with any citation. It would be far more convincing if you gave up the Onus probandi rhetorical fallacy and actually cited facts.
> case in point, it was *them* who suggested that they wouldn't fund out-of-tree work but would consider funding upstreaming work, except when pressed for the details, all i got was silence.
For those following along at home, this is the relevant set of threads:
http://lists.coreinfrastructure.org/pipermail/cii-discuss...
A quick precis is that they told you your project was bad because the code was never going upstream. You told them it was because of the kernel developers' attitude, so they should fund you anyway. They told you to submit a grant proposal, you whined more about the kernel attitudes, and eventually even your apologist told you that submitting a proposal might be the smartest thing to do. At that point you went silent, not vice versa as you imply above.
> obviously i won't spend time writing up a begging proposal just to be told 'no sorry, we don't fund multi-year projects at all'. that's something one should be told in advance (or heck, be part of some public guidelines so that others will know the rules too).
You seem to have a fatally flawed grasp of how public funding works. If you don't tell people why you want the money and how you will spend it, they're unlikely to disburse. Saying I'm good and I know the problem, now hand over the money, doesn't even work for most academics who have a solid reputation in the field; which is why most of them spend >30% of their time writing grant proposals.
> as for getting code upstream, how about you check the kernel git logs (minus the stuff that was not properly credited)?
[email protected]> git log|grep -i 'Author: pax.*team'|wc -l
1
Stellar, I must say. And before you light off on those who have misappropriated your credit, please bear in mind that getting code upstream on behalf of reluctant or incapable actors is a hugely valuable and time-consuming skill, and one of the reasons groups like Linaro exist and are well funded. If more of your stuff does go upstream, it will be because of the not inconsiderable efforts of other people in this area.
You now have a business model selling non-upstream security patches to customers. There's nothing wrong with that, it's a fairly ordinary first-stage business model, but it does rather depend on the patches not being upstream in the first place, calling into question the earnestness of your attempt to put them there.
Now here's some free advice in my field, which is helping companies align their businesses in open source: the selling-out-of-tree-patches route is always an eventual failure, particularly with the kernel, because if the functionality is that useful, it gets upstreamed or reinvented in spite of you, leaving you with nothing to sell. If your business plan B is selling expertise, you have to bear in mind that it will be a tough sell when you have no out-of-tree differentiator left and git history denies that you had anything to do with the in-tree patches. In fact, "crazy security person" will become a self-fulfilling prophecy. The advice? It was obvious to everyone else who read this, but for you it is: do the upstreaming yourself before it gets done for you. That way you have a legitimate historical claim to Plan B, and you may also have a Plan A selling a rollup of upstream-tracking patches, integrated and delivered before the distributions get around to it. Even your application to the CII couldn't then be dismissed because your work wasn't going anywhere. Your alternative is to continue playing the role of Cassandra and probably suffer her eventual fate.
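[The dispute above over what git history "shows" comes down to how you count. As a rough illustration (the helper names and the toy repository below are the editor's, not from the thread), counting by the commit Author field and counting by credits in commit messages can give very different figures:]

```shell
# Two ways to "count" a contributor in git history:
# by the Author field (what the one-liner above measures), versus by
# mentions in commit messages (Reported-by, "thanks to", etc.).
# Helper names and patterns are illustrative only.
count_by_author()  { git -C "$1" log -i --author="$2" --oneline | wc -l; }
count_by_mention() { git -C "$1" log -i --grep="$2" --oneline | wc -l; }
```

[Against a real kernel tree one would compare something like `count_by_author linux 'pax.*team'` with `count_by_mention linux 'pax team'`; the gap between such numbers is essentially what the two sides are arguing about.]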

Posted Nov 7, 2015 23:20 UTC (Sat) by PaXTeam (guest, #24616) [Link]

> Second, for the potentially viable pieces this would be a multi-year
> full-time job. Is the CII willing to fund projects at that level? If not
> we would all end up with lots of unfinished and partially broken features.
please show me the answer to that question. without a definitive 'yes' there's no point in submitting a proposal, because that's the time frame the job will take in my opinion, and any proposal with that requirement would be shot down immediately and be a waste of my time. and i stand by my claim that such simple basic requirements should be public information.
> Stellar, I must say.
"Lies, damned lies, and statistics". you know there's more than one way to get code into the kernel? how about you use your git-fu to find all the bugreports/suggested fixes that went in because of us? as for me specifically, Greg explicitly banned me from future contributions via af45f32d25cc1, so it's no wonder i don't send patches in directly (and that one commit you found that went in despite said ban is actually a very bad example, because it's also the one that Linus censored for no good reason, which made me decide to never send security fixes upstream until that practice changes).
> You now have a business model selling non-upstream security patches to customers.
now? we've had paid sponsorship for our various stable kernel series for 7 years. i wouldn't call it a business model though, since it hasn't paid anybody's bills.
> [...]calling into question the earnestness of your attempt to put them there.
i must be missing something here, but what attempt? i've never in my life tried to submit PaX upstream (for all the reasons discussed already). the CII mails were exploratory, to see how serious that whole organization is about actually securing core infrastructure. in a way i've got my answers, there's nothing more to the story.
as for your free advice, let me reciprocate: complex problems don't solve themselves. code solving complex problems doesn't write itself. people writing code that solves complex problems are few and far between, as you will find out in short order. such people (domain experts) don't work for free, with few exceptions like ourselves. biting the hand that feeds you will only end you up in hunger.
PS: since you're so sure about kernel developers' ability to reimplement our code, maybe look at what parallel features i still maintain in PaX despite vanilla having a 'totally-not-reinvented-here' implementation, and try to understand the reason. or just look at all the CVEs that affected, say, vanilla's ASLR but didn't affect mine.
PPS: Cassandra never wrote code, i do. criticizing the sorry state of kernel security is a side project for when i'm bored or just waiting for the next kernel to compile (i wish LTO was more efficient).

Posted Nov 8, 2015 2:28 UTC (Sun) by jejb (subscriber, #6654) [Link]

In other words, you tried to define their process for them ... I can't think why that wouldn't work.
> "Lies, damned lies, and statistics".
The problem with ad hominem attacks is that they're singularly ineffective against a transparently factual argument. I posted a one-line command anyone could run to get the number of patches you've authored in the kernel. Why don't you post an equivalent that gives figures you like better?
> i've never in my life tried to submit PaX upstream (for all the reasons discussed already).
So the master plan is to demonstrate your expertise by the number of patches you haven't submitted? Great plan, world domination beckons, sorry that one got away from you, but I'm sure you won't let it happen again.

Posted Nov 8, 2015 2:56 UTC (Sun) by PaXTeam (guest, #24616) [Link]

what? since when does asking a question define anything? isn't that how we find out what someone else thinks? isn't that what *they* have that webform (never mind the mailing lists) for as well? in other words, you admit that my question was not actually answered.
> The problem with ad hominem attacks is that they're singularly ineffective against a transparently factual argument.
you didn't have an argument to begin with, that's what i explained in the part you carefully chose not to quote. i'm not here to defend myself against your clearly idiotic attempts at proving whatever it is you're trying to prove; as they say even in kernel circles, code talks, bullshit walks. you can look at mine and decide what i can or cannot do (not that you have the knowledge to understand most of it, mind you). that said, there are clearly other, more capable people who have done so and decided that my/our work was worth something, else nobody would have been feeding off of it for the past 15 years and still counting. and as incredible as it may seem to you, life doesn't revolve around the vanilla kernel; not everyone is dying to get their code in there, especially when it means putting up with the kind of stupid hostility on lkml that you have now also demonstrated here (it's ironic how you came to the defense of josh, who specifically asked people not to bring that infamous lkml style here. good job there, James.). as for world domination, there are many ways to achieve it, and something tells me you're clearly out of your league here, since PaX has already achieved it. you are running code that implements PaX features as we speak.

Posted Nov 8, 2015 16:52 UTC (Sun) by jejb (subscriber, #6654) [Link]

I posted the one-line git script giving your authored patches in response to this original request by you (this one, just in case you've forgotten: http://lwn.net/Articles/663591/):
> as for getting code upstream, how about you check the kernel git logs (minus the stuff that was not properly credited)?
I take it, by the way you've shifted ground in the previous threads, that you wish to withdraw that request?

Posted Nov 8, 2015 19:31 UTC (Sun) by PaXTeam (guest, #24616) [Link]

Posted Nov 8, 2015 22:31 UTC (Sun) by pizza (subscriber, #46) [Link]

Please produce one that's not flawed, or less flawed. It should take less time than you've already wasted here.

Posted Nov 8, 2015 22:49 UTC (Sun) by PaXTeam (guest, #24616) [Link]

anyway, since it's you guys who have a bee in your bonnet, let's test your level of intelligence too. first figure out my email address and project name, then try to find the commits that say they come from there (it brought back some memories from 2004 already, how time flies! i'm surprised i actually managed to accomplish this much while explicitly not trying, imagine if i did :). it's an incredibly complex task, so by accomplishing it you'll prove yourself to be the top dog here on lwn, whatever that's worth ;).

Posted Nov 8, 2015 23:25 UTC (Sun) by pizza (subscriber, #46) [Link]

*shrug* Or don't; you're only sullying your own reputation.

Posted Nov 9, 2015 7:08 UTC (Mon) by jospoortvliet (guest, #33164) [Link]

Posted Nov 9, 2015 11:38 UTC (Mon) by hkario (subscriber, #94864) [Link]

I wouldn't either.

Posted Nov 12, 2015 2:09 UTC (Thu) by jschrod (subscriber, #1646) [Link]

Posted Nov 12, 2015 8:50 UTC (Thu) by nwmcsween (guest, #62367) [Link]

Posted Nov 8, 2015 3:38 UTC (Sun) by PaXTeam (guest, #24616) [Link]

Posted Nov 12, 2015 13:47 UTC (Thu) by nix (subscriber, #2304) [Link]

Ah. I thought my memory wasn't failing me. Compare to PaXTeam's response to .
PaXTeam is not averse to outright lying if it means he gets to seem right, I see. Maybe PaXTeam's memory is failing, and this apparent contradiction is not a brazen lie, but given that the two posts were made within a day of each other I doubt it. (PaXTeam's total unwillingness to assume good faith in others deserves some reflection. Yes, I *do* think he is lying by implication here, and doing so when there's almost nothing at stake. God alone knows what he's willing to stoop to when something *is* at stake. Gosh, I wonder why his fixes aren't going upstream very fast.)

Posted Nov 12, 2015 14:11 UTC (Thu) by PaXTeam (guest, #24616) [Link]

> and that one commit you found that went in despite said ban
also, somebody's ban doesn't mean it will translate into someone else's enforcement of that ban, as is clear from the commit in question. it's somewhat sad that it takes a security fix to expose the fallacy of this policy though. the rest of your pithy ad hominem speaks for itself better than i ever could ;).

Posted Nov 12, 2015 15:58 UTC (Thu) by andreashappe (subscriber, #4810) [Link]

Posted Nov 7, 2015 19:01 UTC (Sat) by cwillu (guest, #67268) [Link]

I don't see this message in my mailbox, so presumably it got swallowed.

Posted Nov 7, 2015 22:33 UTC (Sat) by ssmith32 (subscriber, #72404) [Link]

You are aware that it's entirely possible that everyone is wrong here, right?
That the kernel maintainers need to focus more on security, that the article was biased, that you're irresponsible to decry the state of security and do nothing to help, and that your patchsets wouldn't help that much and are the wrong direction for the kernel? That just because the kernel maintainers aren't 100% right doesn't mean you are?

Posted Nov 9, 2015 9:50 UTC (Mon) by njd27 (guest, #5770) [Link]

I think you have him backwards there. Jon is comparing this to Mindcraft because he thinks that despite being unpalatable to a lot of the community, the article might in fact contain a lot of truth.

Posted Nov 9, 2015 14:03 UTC (Mon) by corbet (editor, #1) [Link]

Posted Nov 9, 2015 15:13 UTC (Mon) by spender (guest, #23067) [Link]

"There are rumors of dark forces that drove the article in the hopes of taking Linux down a notch. All of this could well be true"
Just as you criticized the article for mentioning Ashley Madison even though the very first sentence of the following paragraph says it didn't involve the Linux kernel, you can't give credence to conspiracy theories without incurring the same criticism (in other words, you can't play the Glenn Beck "I'm just asking the questions here!" whose "questions" fuel the conspiracy theories of others). Much like mentioning Ashley Madison as an example for non-technical readers of the prevalence of Linux in the world: if you're criticizing that mention, then shouldn't likening a non-FUD article to a FUD article also deserve criticism, especially given the rosy, self-congratulatory picture you painted of upstream Linux security?
As the PaX Team pointed out in the initial post, the motivations aren't hard to understand -- you made no mention at all of it being the 5th in a long-running series following a pretty predictable time trajectory.
No, we didn't miss the overall analogy you were trying to make, we just don't think you can have your cake and eat it too.
-Brad

Posted Nov 9, 2015 15:18 UTC (Mon) by karath (subscriber, #19025) [Link]

Posted Nov 9, 2015 17:06 UTC (Mon) by k3ninho (subscriber, #50375) [Link]

It's gracious of you not to blame your readers. I figure they're a fair target: there's that line about those ignorant of history being condemned to re-implement Unix -- as your readers are! :-)
K3n.

Posted Nov 9, 2015 18:43 UTC (Mon) by bojan (subscriber, #14302) [Link]

Unfortunately, I understand neither the "security" people (PaXTeam/spender) nor the mainstream kernel people in terms of their attitude. I confess I have absolutely no technical capability on any of these subjects, but if they had all decided to work together, instead of having endless and pointless flame wars and blame-game exchanges, a lot of the stuff would have been done already. And all the while everyone involved could have made another big pile of money on the stuff. They all seem to want a better Linux kernel, so I've got no idea what the problem is. It seems that nobody is willing to yield any of their positions even a little bit. Instead, both sides seem bent on trying to insult their way into forcing the other side to give up. Which, of course, never works - it just causes more pushback.
Perplexing stuff...

Posted Nov 9, 2015 19:00 UTC (Mon) by sfeam (subscriber, #2841) [Link]

Posted Nov 9, 2015 19:44 UTC (Mon) by bojan (subscriber, #14302) [Link]

Take a scientific computational cluster with an "air gap", for example. You'd probably want most of the security stuff turned off on it to achieve maximum performance, because you can trust all the users. Now take a few billion mobile phones that may be difficult or slow to patch. You'd probably want to kill a lot of the exploit classes there, if those devices can still run reasonably well with most security features turned on.
So it's not either/or. It's probably "it depends". But if the stuff isn't there for everyone to compile/use in the vanilla kernel, it will be harder to make it part of everyday choices for distributors and users.
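[A minimal sketch of what that "everyday choice" looks like in practice: checking whether a given kernel build config enables some hardening options. The Kconfig symbols below are real upstream options (some merged after this 2015 thread); the helper function and the sample list are the editor's illustration, not from the comment.]

```shell
# check_hardening FILE: report whether a few hardening Kconfig symbols
# are enabled in a kernel build config. The option list is a sample.
check_hardening() {
    cfg="$1"   # path to a kernel config, e.g. /boot/config-$(uname -r)
    for opt in CONFIG_RANDOMIZE_BASE CONFIG_STRICT_KERNEL_RWX CONFIG_HARDENED_USERCOPY; do
        if grep -q "^${opt}=y" "$cfg" 2>/dev/null; then
            echo "${opt}=on"
        else
            echo "${opt}=off"
        fi
    done
}
```

[On a typical distro kernel one would call `check_hardening "/boot/config-$(uname -r)"`; the config path varies by distribution.]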

Posted Nov 6, 2015 22:20 UTC (Fri) by artem (subscriber, #51262) [Link]

How sad. This Dijkstra quote comes to mind immediately:
Software engineering, of course, presents itself as another worthy cause, but that is eyewash: if you carefully read its literature and analyse what its devotees actually do, you will discover that software engineering has accepted as its charter "How to program if you cannot."

Posted Nov 7, 2015 0:35 UTC (Sat) by roc (subscriber, #30627) [Link]

I guess that fact was too unpleasant to fit into Dijkstra's world view.

Posted Nov 7, 2015 10:52 UTC (Sat) by ms (subscriber, #41272) [Link]

Certainly. And the interesting thing to me is that once I reach that point, checks will not be ample - model checking at a minimal and really proofs are the only manner forwards. I'm no security knowledgeable, my discipline is all distributed techniques. I perceive and have carried out Paxos and i believe I can clarify how and why it really works to anybody. However I'm at present doing some algorithms combining Paxos with a bunch of variations on VectorClocks and reasoning about causality and consensus. No check is sufficient as a result of there are infinite interleavings of events and my head just could not cope with engaged on this either at the computer or on paper - I discovered I could not intuitively cause about this stuff in any respect. So I started defining the properties and wanted and step by step proving why each of them holds. Without my notes and proofs I can't even explain to myself, let alone anybody else, why this thing works. I find this both utterly obvious that this will occur and completely terrifying - the maintenance price of these algorithms is now an order of magnitude greater.

Posted Nov 19, 2015 12:24 UTC (Thu) by Wol (subscriber, #4433) [Link]

> Indeed. And the interesting thing to me is that once I reach that point, tests are not sufficient - model checking at a minimum and really proofs are the only way forwards.
Or are you just using the wrong maths? Hobbyhorse time again :-) but to quote a fellow Pick developer ... "I often walk into a SQL development shop and see that wall - you know, the one with the huge SQL schema that no-one fully understands on it - and wonder how I can easily hold the entire schema for a Pick database of the same or greater complexity in my head".
But it's simple - by training I'm a Chemist, by interest a Physical Chemist (and by profession an unemployed programmer :-). And when I'm thinking about chemistry, I can ask myself "what's an atom made of" and think about things like the strong nuclear force. Next level up, how do atoms stick together and make molecules, and think about the electroweak force and electron orbitals, and how do chemical reactions happen. Then I think about how molecules stick together to make materials, and think about metals, and/or Van der Waals forces, and stuff.
Point is, you need to *layer* stuff, and look at things, and say "how can I split bits off into 'black boxes' so at any one level I can assume the other levels 'just work'". For example, with Pick a FILE (table to you) stores a class - a set of identical objects. One object per Record (row). And, same as relational, one attribute per Field (column). Can you map your relational tables to reality so easily? :-)
Going back THIRTY years, I remember a story about a guy who built little computer crabs that would quite happily scuttle around in the surf zone. Because he didn't try to work out how to solve all the problems at once - each of his (incredibly puny by today's standards - this is the 8080/Z80 era!) processors was set to just process a little bit of the problem and there was no central "brain". But it worked ... Maybe you should just write a bunch of small modules to solve each individual problem, and let the final answer "just happen".
Cheers,
Wol

Posted Nov 19, 2015 19:28 UTC (Thu) by ksandstr (visitor, #60862) [Link]

To my understanding, this is exactly what a mathematical abstraction does. For example in Z notation we would construct schemas for the various modifying ("delta") operations on the base schema, and then argue about preservation of formal invariants, properties of the result, and transitivity of the operation when chained with itself, or with the preceding aggregate schema composed of schemas A through O (for which these have already been argued).
The end result is a set of operations that, executed in arbitrary order, yield a set of properties holding for the result and outputs. Thus proving the formal design correct (with the usual caveats concerning scope, correspondence with its implementation [though that can be proven as well], and read-only ["xi"] operations).
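The "executed in arbitrary order, the properties still hold" idea can be illustrated outside Z with a brute-force property check (a hypothetical Python sketch, nowhere near a proof): enumerate every permutation of a set of guarded delta operations and assert the invariant after each step.

```python
from itertools import permutations

# Toy "base schema": an account balance whose invariant is balance >= 0.
def deposit(n):
    return lambda state: state + n

def withdraw(n):
    # Guarded operation: the precondition refuses to go below zero.
    return lambda state: state - n if state >= n else state

ops = [deposit(50), withdraw(30), deposit(10), withdraw(80)]

# Check the invariant after every prefix of every ordering of the operations.
for order in permutations(ops):
    state = 0
    for op in order:
        state = op(state)
        assert state >= 0, "invariant violated"
```

This only checks 4! concrete orders; a model checker or a proof is what generalizes the same argument to all operations and all interleavings.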

Posted Nov 20, 2015 11:23 UTC (Fri) by Wol (subscriber, #4433) [Link]

Looking through the history of computing (and probably plenty of other fields too), you'll probably find that people "can't see the wood for the trees" more often than not. They dive into the detail and completely miss the big picture.
(Medicine, another interest of mine, suffers from that too - I remember someone talking about the consultant wanting to amputate a gangrenous leg to save someone's life - oblivious to the fact that the patient was dying of cancer.)
Cheers,
Wol

Posted Nov 7, 2015 6:35 UTC (Sat) by dgc (subscriber, #6611) [Link]

https://www.youtube.com/watch?v=VpuVDfSXs-g
(LCA 2015 - "Programming Considered Harmful")
FWIW, I think this talk is very relevant to why writing secure software is so hard..
-Dave.

Posted Nov 7, 2015 5:49 UTC (Sat) by kunitz (subscriber, #3965) [Link]

While we are spending millions on a multitude of security problems, kernel issues are not on our high-priority list. Actually I remember only once having discussed a kernel vulnerability. The result of the analysis was that all our systems were running kernels that were older than the kernel that had the vulnerability.
But "patch management" is a real topic for us. Software must continue to work if we install security patches or update to new releases because of the end-of-life policy of a vendor. The revenue of the company depends on the IT systems working. So "not breaking user space" is a security feature for us, because a breakage of one part of our several tens of thousands of Linux systems would stop the roll-out of the security update.
Another problem is embedded software or firmware. These days almost all hardware systems include an operating system, often some Linux version, providing a full network stack embedded to support remote management. Regularly those systems don't survive our mandatory security scan, because vendors still did not update the embedded openssl.
The real challenge is to provide a software stack that can be operated in the hostile environment of the Internet, maintaining full system integrity for ten years or even longer without any customer maintenance. The current state of software engineering will require support for an automated update process, but vendors must understand that their business model must be able to finance the resources providing the updates.
Overall I am optimistic: networked software is not the first technology used by mankind causing problems that were addressed later. Steam engine use could result in boiler explosions, but the "engineers" were able to reduce this risk significantly over a few decades.

Posted Nov 7, 2015 10:29 UTC (Sat) by ms (subscriber, #41272) [Link]

The following is all guesswork; I'd be keen to know if others have evidence one way or another on this: The people who learn how to hack into these systems via kernel vulnerabilities know that the skills they've learnt have a market. Thus they don't tend to hack in order to wreak havoc - indeed, in general where data has been stolen in order to release and embarrass people, it _appears_ as if those hacks are through much simpler vectors. I.e. lesser-skilled hackers find there is a whole load of low-hanging fruit which they can get at. They're not being paid ahead of time for the data, so they turn to extortion instead. They don't cover their tracks, and they can often be found and charged with criminal offences.
So if your security meets a certain basic level of proficiency and/or your company isn't doing anything that puts it near the top of "companies we'd like to embarrass" (I suspect the latter is far more effective at keeping systems "safe" than the former), then the hackers that get into your system are likely to be skilled, paid, and probably not going to do much damage - they're stealing data for a competitor / state. So it doesn't hurt your bottom line - at least not in a way which your shareholders will be aware of. So why fund security?

Posted Nov 7, 2015 17:02 UTC (Sat) by citypw (guest, #82661) [Link]

On the other hand, some effective mitigation at kernel level would be very helpful to crush a cybercriminal/skiddie's attempt. If one of your customers running a futures trading platform exposes some open API to their clients, and the server has some memory corruption bugs that can be exploited remotely, then you know there are known attack techniques (such as offset2lib) that can help the attacker make the weaponized exploit much easier. Will you explain the failosophy "A bug is a bug" to your customer and tell them it'd be OK? Btw, offset2lib is ineffective against PaX/Grsecurity's ASLR implementation.
For most commercial uses, more security mitigation in the software won't cost you extra budget. You'll still have to do the regression test for each upgrade.

Posted Nov 12, 2015 16:14 UTC (Thu) by andreashappe (subscriber, #4810) [Link]

Keep in mind that I specialise in external web-based penetration tests and that in-house tests (local LAN) will likely yield different results.

Posted Nov 7, 2015 20:33 UTC (Sat) by mattdm (subscriber, #18) [Link]

I keep reading this headline as "a new Minecraft moment", and thinking that maybe they've decided to follow up the .NET thing by open-sourcing Minecraft. Oh well. I mean, security is good too, I guess.

Posted Nov 7, 2015 22:24 UTC (Sat) by ssmith32 (subscriber, #72404) [Link]

Posted Nov 12, 2015 17:29 UTC (Thu) by smitty_one_every (subscriber, #28989) [Link]

Posted Nov 8, 2015 10:34 UTC (Sun) by jcm (subscriber, #18262) [Link]

Posted Nov 9, 2015 7:15 UTC (Mon) by jospoortvliet (guest, #33164) [Link]

Posted Nov 9, 2015 15:53 UTC (Mon) by neiljerram (subscriber, #12005) [Link]

(Oh, and I was also still wondering how Minecraft had taught us about Linux performance - so thanks to the other comment thread that pointed out the 'd', not 'e'.)

Posted Nov 9, 2015 11:31 UTC (Mon) by ortalo (visitor, #4654) [Link]

I would just like to add that in my opinion, there is a general problem with the economics of computer security, which is very visible currently. Two problems, even, maybe.
First, the money spent on computer security is often diverted towards the so-called security "circus": quick, easy solutions that are primarily chosen just in order to "do something" and get better press. It took me a long time - maybe decades - to claim that no security mechanism at all is better than a bad mechanism. But now I firmly believe in this attitude and would rather take the risk knowingly (provided that I can save money/resources for myself) than take a bad approach at solving it (and have no money/resources left when I realize I should have done something else). And I find there are many bad or incomplete approaches currently available in the computer security field.
Those spilling our rare money/resources on ready-made useless tools should get the bad press they deserve. And we really have to enlighten the press on that, because it is not so easy to assess the effectiveness of protection mechanisms (which, by definition, should prevent things from happening).
Second, and this may be more recent and more worrying: the flow of money/resources is oriented in the direction of attack tools and vulnerability discovery much more than in the direction of new protection mechanisms.
This is especially worrying as cyber "defense" initiatives look more and more like the usual industrial projects aimed at producing weapons or intelligence systems. Furthermore, bad useless weapons, because they only work against our very vulnerable current systems; and bad intelligence systems, as even basic school-level encryption scares them down to uselessness.
Meanwhile, all the resources go to those grown-up teenagers playing white-hat hackers with not-so-difficult programming tricks or network monitoring or WWI-level cryptanalysis. And now also to the cyberwarriors and cyberspies who have yet to prove their usefulness at all (especially for peace protection...).
Personally, I would happily leave them all the hype; but I will forcefully claim that they have no right whatsoever to any of the budget allocation decisions. Only those working on protection should. And yes, it means we should decide where to put those resources. We have to claim the exclusive lock for ourselves this time. (And I guess the PaX team would be among the first to benefit from such a change.)
While thinking about it, I wouldn't even leave white-hat or cyber-guys any hype in the end. That is more publicity than they deserve.
I crave for the day I'll read in the newspaper: "Another of those ill-advised debutant programmer hooligans who pretend to be cyber-pirates/warriors modified some well-known virus program code exploiting a programmer mistake and managed nonetheless to bring one of those unfinished and bad-quality programs, X, that we are all obliged to use, to its knees, annoying millions of ordinary users with his unfortunate cyber-vandalism. All the protection experts unanimously recommend that, once again, the budget of the cyber-command be retargeted, or at least leveled off, in order to bring more security engineer positions into the academic field or civilian industry. And that X's producer, XY Inc., be liable for the potential losses if proved to be unprofessional in this affair."

Hmmm - cyber-hooligans - I like the label. Though it doesn't apply well to the battlefield-oriented variant.

Posted Nov 9, 2015 14:28 UTC (Mon) by drag (guest, #31333) [Link]

The state of the 'software security industry' is a f-ng disaster. Failure of the highest order. There are massive amounts of money that go into 'cyber security', but it is usually spent on government compliance and audit efforts. This means that instead of actually putting effort into correcting issues and mitigating future problems, the vast majority of the effort goes into taking existing applications and making them conform to committee-driven guidelines with the minimum amount of effort and changes.
Some level of regulation and standardization is absolutely needed, but lay people are clueless and completely unable to discern the difference between somebody who has valuable experience versus some company that has spent millions on slick marketing and 'native advertising' on large websites and computer magazines. The people with the money unfortunately only have their own judgment to rely on when buying into 'cyber security'.
> Those spilling our rare money/resources on ready-made useless tools should get the bad press they deserve.
There is no such thing as 'our rare money/resources'. You have your money, I have mine. Money being spent by some corporation like Red Hat is their money. Money being spent by governments is the government's money. (You, literally, have far more control over how Walmart spends its money than over what your government does with theirs.)
> This is especially worrying as cyber "defense" initiatives look more and more like the usual industrial projects aimed at producing weapons or intelligence systems. Furthermore, bad useless weapons, because they only work against our very vulnerable current systems; and bad intelligence systems, as even basic school-level encryption scares them down to uselessness.
Having secure software with strong encryption mechanisms in the hands of the public runs counter to the interests of most major governments. Governments, like any other for-profit organization, are primarily interested in self-preservation. Money spent on drone initiatives or banking auditing/oversight regulation compliance is FAR more valuable to them than trying to help the public have a secure mechanism for making phone calls. Especially when those secure mechanisms interfere with data collection efforts.
Sadly you/I/we cannot depend on some magical benefactor with deep pockets to sweep in and make Linux better. It's just not going to happen.
Companies like Red Hat have been massively helpful in spending resources to make the Linux kernel more capable.. however they are driven by the need to show a profit, which means they need to cater directly to the kind of requirements established by their customer base. Customers for EL tend to be far more focused on reducing costs associated with management and software development than on security at the low-level OS.
Enterprise Linux customers tend to rely on physical, human-policy, and network security to protect their 'soft' interiors from being exposed to external threats.. assuming (rightly) that there is very little they can do to actually harden their systems. In fact, when the choice comes down to security vs convenience, I am sure that most customers will happily defeat or strip out any security mechanisms introduced into Linux.
On top of that, most Enterprise software is extremely bad. So much so that 10 hours spent on improving a web front-end will yield more real-world security benefits than 1000 hours spent on Linux kernel bugs for most businesses.
Even for 'normal' Linux users, a security bug in their Firefox NPAPI Flash plugin is far more devastating and poses a massively larger threat than an obscure Linux kernel buffer overflow problem. It is just not that important for attackers to get 'root' to get access to the important information... usually all of which is contained in a single user account.
Ultimately it's up to people like you and me to put the effort and money into improving Linux security. For both ourselves and other people.

Posted Nov 10, 2015 11:05 UTC (Tue) by ortalo (guest, #4654) [Link]

Spilling has always been the case, but now, to me and in computer security, most of the money seems spilled due to bad faith. And this is generally your money or mine: either tax-funded governmental resources or corporate costs that are directly re-imputed on the price of goods/software we are told we are *obliged* to buy. (Look at corporate firewall, home alarm or antivirus software marketing discourse.)
I think it's time to point out that there are a number of "malicious malefactors" around and that there is a real need to identify and sanction them and confiscate the resources they have somehow managed to monopolize. And I do *not* think Linus is among such culprits, by the way. But I believe he may be among those hiding their heads in the sand about the aforementioned evil actors, while he probably has more leverage to counteract them or oblige them to reveal themselves than many of us. [1]
I find that to be of brown-paper-bag level (though head-in-the-sand is somehow a new interpretation).
In the end, I think you are right to say that currently it is only up to us individuals to try honestly to do something to improve Linux or computer security. But I still think that I am right to say that this is not normal; especially while some very serious people get very serious salaries to distribute randomly some hard-to-assess budgets.
[1] A paradoxical situation when you think about it: in a domain where you are first and foremost preoccupied by malicious people, everybody should have factual, transparent and honest behavior as the first priority on their minds.

Posted Nov 9, 2015 15:47 UTC (Mon) by MarcB (subscriber, #101804) [Link]

It even has a nice, seven-line BASIC pseudo-code that describes the current situation and clearly shows that we are stuck in an endless loop. It does not answer the big question, though: how to write better software.
The sad thing is that this is from 2005, and all the things that were obviously stupid ideas 10 years ago have proliferated even more.

Posted Nov 10, 2015 11:20 UTC (Tue) by ortalo (visitor, #4654) [Link]

Note that IMHO, we should investigate further why these dumb things proliferate and get so much support.
If it is only human psychology, well, let's fight it: e.g. Mozilla has shown us that they can do wonderful things given the right message.
If we are facing active people exploiting public credulity: let's identify and fight them.
But, more importantly, let's capitalize on this knowledge and secure *our* systems, to showcase at a minimum (and more later on, of course).
Your reference's conclusion is especially nice to me. "Challenge [...] the conventional wisdom and the status quo": that job I would happily accept.

Posted Nov 30, 2015 9:39 UTC (Mon) by paulj (subscriber, #341) [Link]

That rant is itself a bunch of "empty calories". The converse of the things it rants about, which it is suggesting at some level, would be as bad or worse, and indicative of the worst kind of security thinking that has put a lot of people off. Alternatively, it's just a rant that offers little of value.
Personally, I think there is no magic bullet. Security is, and always has been in human history, an arms race between defenders and attackers, and one that is inherently a trade-off between usability, risks and costs. If there are mistakes being made, it is that we should probably spend more resources on defences that could block entire classes of attacks. E.g., why is the GRSec kernel hardening stuff so hard to apply to regular distros (e.g. there's no reliable source of a GRSec kernel for Fedora or RHEL, is there?). Why does the entire Linux kernel run in one security context? Why are we still writing lots of software in C/C++, often without any basic security-checking abstractions (e.g. basic bounds-checking layers in between I/O and parsing layers, say)? Can hardware do more to provide security with speed?
No doubt there are many people working on "block classes of attacks" stuff; the question is, why aren't there more resources directed there?
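A "bounds-checking layer between I/O and parsing" of the kind mentioned above can be as small as a cursor object that refuses to read past the buffer. A hypothetical sketch (Python here for brevity; the names and the record format are invented for illustration, and real code in C would wrap its copies the same way):

```python
# A tiny bounds-checked reader: all parsing goes through take(), which
# validates the requested length before any data is touched.
class Reader:
    def __init__(self, data: bytes):
        self.data = data
        self.pos = 0

    def take(self, n: int) -> bytes:
        if n < 0 or self.pos + n > len(self.data):
            raise ValueError("truncated or malicious input")
        chunk = self.data[self.pos:self.pos + n]
        self.pos += n
        return chunk

    def take_u8(self) -> int:
        return self.take(1)[0]

# Parse a length-prefixed record: a 1-byte length, then that many payload bytes.
def parse_record(buf: bytes) -> bytes:
    r = Reader(buf)
    length = r.take_u8()
    # An over-long claimed length raises here instead of over-reading.
    return r.take(length)

assert parse_record(b"\x03abcXX") == b"abc"
```

The point of the layer is that no parsing code ever indexes the raw buffer directly, so a lying length field becomes a clean error rather than an out-of-bounds read.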

Posted Nov 10, 2015 2:06 UTC (Tue) by timrichardson (subscriber, #72836) [Link]

> There are a lot of reasons why Linux lags behind in defensive security technologies, but one of the key ones is that the companies making money on Linux have not prioritized the development and integration of those technologies.
This seems like a reason which is really worth exploring. Why is it so?
I think it's not obvious why this doesn't get more attention. Is it possible that the people with the money are right not to prioritise this more highly? After all, what interest do they have in an insecure, exploitable kernel? Where there is common cause, Linux development gets resourced. It has been this way for many years. If filesystems qualify for common interest, surely security does. So there doesn't seem to be any obvious reason why this issue doesn't get more mainstream attention, except that it actually already gets enough. You may say that catastrophe has not struck yet, that the iceberg has not been hit. But it seems that the Linux development process is not overly reactive elsewhere.

Posted Nov 10, 2015 15:53 UTC (Tue) by raven667 (subscriber, #5198) [Link]

That is an interesting question; certainly that's what they really believe, no matter what they publicly say about their commitment to security technologies. What is the actually demonstrated downside for kernel developers and the organizations that pay them? As far as I can tell, there is not enough consequence for the lack of security to drive more investment, so we're left begging and cajoling unconvincingly.

Posted Nov 12, 2015 14:37 UTC (Thu) by ortalo (guest, #4654) [Link]

The key issue with this domain is that it pertains to malicious faults. So, when consequences manifest themselves, it is too late to act. And if the current commitment to a lack of voluntary strategy persists, we are going to oscillate between phases of relaxed unconsciousness and anxious paranoia.
Admittedly, kernel developers seem pretty resistant to paranoia. That is a good thing. But I am waiting for the day when armed land-drones patrol US streets in the vicinity of their kids' schools for them to discover the feeling. They are not so distant, the days when innocent lives will unconsciously rely on the security of (Linux-based) computer systems; under water, that is already the case if I remember my last dive correctly, as well as in several recent cars according to some reports.

Posted Nov 12, 2015 14:32 UTC (Thu) by MarcB (subscriber, #101804) [Link]

Classic hosting companies that use Linux as an exposed front-end system are retreating from development, while HPC, mobile and "generic enterprise", i.e. RHEL/SLES, are pushing the kernel in their directions.
This is actually not that surprising: for hosting needs the kernel has been "done" for quite some time now. Apart from support for current hardware, there is not much use for newer kernels. Linux 3.2, or even older, works just fine.
Hosting does not need scalability to hundreds or thousands of CPU cores (one uses commodity hardware), complex instrumentation like perf or tracing (systems are locked down as much as possible) or advanced power management (if the system doesn't have constant high load, it's not making enough money). So why should hosting companies still make strong investments in kernel development? Even if they had something to contribute, the hurdles for contribution have become higher and higher.
For their security needs, hosting companies already use Grsecurity. I have no numbers, but some experience suggests that Grsecurity is basically a fixed requirement for shared hosting.
On the other hand, kernel security is almost irrelevant on the nodes of a supercomputer or on a system running large enterprise databases that are wrapped in layers of middleware. And mobile vendors simply do not care.

Posted Nov 10, 2015 4:18 UTC (Tue) by bronson (subscriber, #4806) [Link]

Linking

Posted Nov 10, 2015 13:15 UTC (Tue) by corbet (editor, #1) [Link]

Posted Nov 11, 2015 22:38 UTC (Wed) by rickmoen (subscriber, #6943) [Link]

The assembled likely recall that in August 2011, kernel.org was root-compromised. I'm sure the system's hard drives were sent off for forensic examination, and we've all been waiting patiently for the answer to the most important question: What was the compromise vector? From shortly after the compromise was discovered on August 28, 2011, right through April 1st, 2013, kernel.org included this note at the top of the site News: 'Thanks to all for your patience and understanding during our outage and please bear with us as we bring up the different kernel.org systems over the next few weeks. We will be writing up a report on the incident in the future.' (Emphasis added.) That comment was removed (along with the rest of the site News) in a May 2013 edit, and there hasn't been -- to my knowledge -- a peep about any report on the incident since then. This has been disappointing. When the Debian Project found unexpected compromise of several of its servers in 2007, Wichert Akkerman wrote and posted an excellent public report on exactly what happened. Likewise, the Apache Foundation did the right thing with good public post-mortems of the 2010 Web site breaches. Ars Technica's Dan Goodin was still trying to follow up on the lack of a post-mortem on the kernel.org meltdown -- in 2013. Two years ago. He wrote: Linux developer and maintainer Greg Kroah-Hartman told Ars that the investigation has yet to be completed and gave no timetable for when a report will be released. [...] Kroah-Hartman also told Ars kernel.org systems were rebuilt from scratch following the attack. Officials have developed new tools and procedures since then, but he declined to say what they are. "There will be a report later this year about site [sic] has been engineered, but don't quote me on when it will be released as I am not responsible for it," he wrote.
Who's responsible, then? Is anyone? Anybody? Bueller? Or is it a state secret, or what? Two years since Greg K-H said there would be a report 'later this year', and four years since the meltdown: nothing yet. How about some facts? Rick Moen
[email protected]

Posted Nov 12, 2015 14:19 UTC (Thu) by ortalo (guest, #4654) [Link]

Less seriously, note that if even the Linux mafia doesn't know, it must be the Venusians; they are notoriously stealthy in their invasions.

Posted Nov 14, 2015 12:46 UTC (Sat) by error27 (subscriber, #8346) [Link]

I know the kernel.org admins have given talks about some of the new protections that have been put into place. There are no more shell logins; instead everything uses gitolite. The different services are on different hosts. There are more kernel.org staff now. People are using two-factor authentication. Some other stuff. Do a search for Konstantin Ryabitsev.

Posted Nov 14, 2015 15:58 UTC (Sat) by rickmoen (subscriber, #6943) [Link]

I beg your pardon if I was somehow unclear: That was said to have been the path of entry to the machine (and I can readily believe that, as it was also the exact path of entry into shells.sourceforge.net, many years prior, around 2002, and into many other shared Web hosts for many years). But that isn't what's of main interest, and isn't what the long-promised forensic study would primarily concern: How did intruders escalate to root? To quote the kernel.org administrator in the August 2011 Dan Goodin article you cited: 'How they managed to exploit that to root access is currently unknown and is being investigated'. OK, folks, you've now had four years of investigation. What was the path of escalation to root? (Also, other details that would logically be covered by a forensic study, such as: Whose key was stolen? Who stole the key?) This is the kind of post-mortem that was promised prominently on the front page of kernel.org, to reporters, and elsewhere for a long time (and then summarily removed as a promise from the front page of kernel.org, without comment, along with the rest of the site News section, and apparently dropped). It would still be appropriate to know and share that information. Especially the datum of whether the path to root privilege was or was not a kernel bug (and, if not, what it was). Rick Moen
[email protected]

Posted Nov 22, 2015 12:42 UTC (Sun) by rickmoen (subscriber, #6943) [Link]

I've accomplished a better overview of revelations that came out soon after the break-in, and suppose I've discovered the answer, via a leaked copy of kernel.org chief sysadmin John H. 'Warthog9' Hawley's Aug. 29, 2011 e-mail to shell customers (two days before the public was knowledgeable), plus Aug. Thirty first comments to The Register's Dan Goodin by 'two security researchers who were briefed on the breach': Root escalation was through exploit of a Linux kernel safety hole: Per the 2 safety researchers, it was one both extremely embarrassing (broad-open entry to /dev/mem contents together with the running kernel's picture in RAM, in 2.6 kernels of that day) and known-exploitable for the prior six years by canned 'sploits, certainly one of which (Phalanx) was run by some script kiddie after entry utilizing stolen dev credentials. Other tidbits: - Site admins left the basis-compromised Web servers running with all companies still lit up, for multiple days. - Site admins and Linux Foundation sat on the knowledge and failed to tell the public for those self same a number of days. - Site admins and Linux Foundation have never revealed whether trojaned Linux source tarballs have been posted in the http/ftp tree for the 19+ days before they took the site down. (Sure, git checkout was fantastic, however what in regards to the hundreds of tarball downloads?) - After promising a report for several years after which quietly eradicating that promise from the entrance page of kernel.org, Linux Foundation now stonewalls press queries.
I posted my best attempt at reconstructing the story, absent a real report from insiders, to SVLUG's main mailing list yesterday. (Necessarily, there are surmises. If the people with the facts had been more forthcoming, we would know what happened for sure.) I do have to wonder: if there's another embarrassing screwup, will we even be told about it at all? Rick Moen
[email protected]

Posted Nov 22, 2015 14:25 UTC (Sun) by spender (guest, #23067) [Link]

Additionally, it's preferable to use live memory acquisition prior to powering off the system; otherwise you lose out on memory-resident artifacts on which you can perform forensics.
-Brad

How about the long-overdue post-mortem on the August 2011 kernel.org compromise?

Posted Nov 22, 2015 16:28 UTC (Sun) by rickmoen (subscriber, #6943) [Link]

Thanks for your comments, Brad. I'd been relying on Dan Goodin's claim of Phalanx being what was used to gain root, in the bit where he cited 'two security researchers who had been briefed on the breach' to that effect. Goodin also elaborated: 'Fellow security researcher Dan Rosenberg said he was also briefed that the attackers used Phalanx to compromise the kernel.org machines.' This was the first time I'd heard of a rootkit being claimed to be bundled with an attack tool, and I noted that oddity in my posting to SVLUG. That having been said, yeah, the Phalanx README doesn't specifically claim this, so perhaps Goodin and his several 'security researcher' sources blew that detail, and nobody but kernel.org insiders yet knows the escalation path used to gain root.

Also, it's preferable to use live memory acquisition prior to powering off the system, otherwise you lose out on memory-resident artifacts that you can perform forensics on.
Arguable, but a tradeoff: you can poke the compromised live system for state information, but with the downside of leaving your system running under hostile control. I was always taught that, on balance, it is better to pull power to end the intrusion. Rick Moen
[email protected]

Posted Nov 20, 2015 8:23 UTC (Fri) by toyotabedzrock (guest, #88005) [Link]

Posted Nov 20, 2015 9:31 UTC (Fri) by gioele (subscriber, #61675) [Link]

With "one thing" you mean those that produce these closed source drivers, proper?
If the "shopper product corporations" simply caught to utilizing components with mainlined open supply drivers, then updating their products could be a lot easier.

A new Mindcraft moment?

Posted Nov 20, 2015 11:29 UTC (Fri) by Wol (subscriber, #4433) [Link]

They have ring-zero privilege, can access protected memory directly, and cannot be audited. Trick a kernel into running a compromised module and it's game over.
Even tickle a bug in a "good" module, and it's probably game over - in this case quite literally, as such modules tend to be video drivers optimised for games ...
