There was a novice who learned much at the Master’s feet, but felt something to be missing. After meditating on his doubts for some time, he found the courage to approach Master Foo about his problem.
“Master Foo,” he asked, “why do Unix users not employ antivirus programs? And defragmenters? And malware cleaners?”
Master Foo smiled, and said “When your house is well constructed, there is no need to add pillars to keep the roof in place.”
The novice replied “Would it not be better to use these things anyway, just to be certain?â€
Master Foo reached for a nearby ball of string, and began wrapping it around the novice’s feet.
“What are you doing?” the novice asked in surprise.
Master Foo replied simply: “Tying your shoes.”
Upon hearing this, the novice was enlightened.
(Other koans here.)
No need for defragmentation? Or malware detection? Really?
The only thing I use AV for is incoming e-mail, but that’s less for my protection and more to tell friends and family who still use Windows that they have a problem. So my AV use is more of a community service.
Yes, really. You know, when you have to reference an experimental filesystem in order to claim that Unix needs defragmenters, you only make yourself look silly.
Brandon, that’s why I read my email with mutt over an ssh session to an Alphaserver running Gentoo. I’m immune to email worms.
…and Eric, ISTR ext4 was recently declared non-experimental, though I’m sticking with ext3 for now.
You know, responding only to ridicule a weak point in a comment, while ignoring the valid points the commenter made, is getting old.
Maybe trying to edit your post, adding a mention of rootkits, possibly considering their future effect when Linux users are as luserish and devoid of clue as Windows users are today, would be a far, far better thing.
Bonus points for adding a mention to the fact that an OS can be installed in under two hours, but when a virus/cracker/whatever deletes your files, they’re usually gone (backups? do lusers have backups?).
Or you could do what my friend is planning: browse the web on an emulated Amiga in a Linux partition dedicated to the task of emulating the Amiga. Yo dawg, I herd you like OS’s, etc.
Heh…actually, I’ve given serious consideration to replacing the Alpha with Linux running under Hercules on a nice fast multicore Opteron box…
As Jay Maynard pointed out, ext4dev is now ext4. Also, are xfs and ext2 non-experimental enough for you, or is that still silly?
>As Jay Maynard pointed out, ext4dev is now ext4. Also, are xfs and ext2 non-experimental enough for you, or is that still silly?
Actually, it is. To the best of my knowledge, none of these defraggers are required for production use. I’ve never run one.
Near as I can tell there are two use cases for running a defragger:
* Locality of reference on-disk makes many common file operations much faster. This is particularly true for a single-user, single-tasking OS with very slow disks, such as DOS in the bad old days. Multi-user, multitasking OSes like Unix and modern-day Windows tend to have profoundly different disk usage patterns: depending on what kind of applications are running, several different processes may request accesses from all over the disk to be serviced by the kernel all in one go. This, combined with hardware and software disk caches, limits the advantage of defragmented files in most common use cases. (Where access speed is paramount, special provisions are often made; Oracle is sometimes given entire disks or partitions and accesses them with raw hardware calls.)
* Defragmenting a disk moves all allocated clusters to the lower areas of the disk, enabling the use of a partition tool like fips to resize a FAT partition. This is again not really an issue with modern FSes like ext2 and NTFS.
The problem is that as users migrated from DOS to Windows 9x to Windows NT-based OSes they carried the knowledge with them that regular defragging is an Inherent Good — and Microsoft, having never been one to educate its user base out of old bad habits, did little to disabuse them of this notion.
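(If you’re curious whether fragmentation is even measurable on your own ext2/ext3 box, e2fsprogs ships tools for that. A quick sketch; the file and device names below are only examples, so substitute your own:)
$ # How many extents does one large, busy file occupy? One extent means fully contiguous.
$ filefrag -v /var/log/syslog
$ # Whole-filesystem view: a forced, no-change fsck prints a "non-contiguous"
$ # percentage in its summary (best run on an unmounted filesystem).
$ e2fsck -fn /dev/sda1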
I generally only use an antivirus on Mac OS X and Linux when exchanging files. Even though malware won’t affect me, I don’t want to be a Typhoid Mary.
> To the best of my knowledge, none of these defraggers are required for production use. I’ve never run one.
Many Windows boxes haven’t been defragged, either.
Methinks you’ve tripped over your own shoelaces here, ESR.
>Many Windows boxes haven’t been defragged, either.
That was my favorite dead chicken in Windows, but I quickly figured out that it didn’t do anything. I guess every other Windows user figured out the same thing.
>Many Windows boxes haven’t been defragged, either.
No doubt those would be the ones that aren’t actually used for anything more than email and surfing and disk usage never goes over 25% of free space after installation.
Wake me up when Unix systems actually require defraggers and antivirus programs and malware cleansers. Until then, you and grendelkhan are both indulging in petty carping.
I notice that just about every thread on this blog contains at least some ‘petty carping’ from somebody.
Intrusion, rootkit, and binary-tampering (e.g. Tripwire) detection are essential parts of the toolkits of any Unix admin worth his/her salt, along with known-good binaries, and a sound backup-scrub-reinstall procedure should the worst come to pass.
Jeff, there’s a big difference between needing countermeasures against random crap floating around in the tubes, and needing countermeasures against a determined cracker interested in your particular system. Tripwire and the like are of the latter sort, and even so they’re only worth bothering with on especially sensitive systems.
Daniel Franke,
I’ve had a debian box r00ted twice because I wasn’t careful. Not particularly sensitive, either: just a regular old personal laptop. Some old vulnerability in Sun RPC. So no, it doesn’t take a particularly determined cracker: just an asshole with nmap, a root vulnerability with exploits in the wild, and a lazy or inexperienced admin.
I don’t use Tripwire on my personal boxen these days, but a periodic chkrootkit helps me make sure I’m clean.
>> you and grendelkhan are both indulging in petty carping.
> I’ve had a debian box r00ted twice because I wasn’t careful.
(Jim notes that a full two decades have passed since the Morris worm now.)
Raymond? Raymond? Anyone?
Hm. Wikipedia says interesting things, such as
(http://en.wikipedia.org/wiki/Ext3#Defragmentation).
Is this wrong?
Eric:
My above comment was quoting from the link cited, and I tripped on this:
http://threeeighthsspacer.com/blog/2008/05/26/mad_security/
I don’t know exactly what word(s) made it choke, but I have the complete comment saved if you want to test.
Essentially, trying to post got me a nice 412 page:
Precondition Failed
The precondition on the request for the URL /wp-comments-post.php evaluated to false.
“I’ve had a debian box r00ted twice because I wasn’t careful.”
I think the point of the “koan” is that the fix would then be to rebuild the OS, not install some other program to look for and quarantine the code that triggers the flaw. It’s not that Unix systems don’t have flaws, it’s that the approach in such systems is to fix the fundamental structure of the OS rather than add another application (which could contain its own errors, thus adding to the problem) on top of the fundamental OS.
Either that or I simply don’t get it.
Jeff Read > * Defragmenting a disk moves all allocated clusters to the lower areas of the disk, enabling the use of a partition tool like fips to resize a FAT partition. This is again not really an issue with modern FSes like ext2 and NTFS.
I think it is with NTFS. I recently had to defrag a 120 GB drive, all of it one Win XP NTFS partition, in order to repartition it for dual-boot. 55% of the capacity was free, but in tiny, tiny fragments over the 120 GB drive. Ntfstools refused to reduce the size of the filesystem, though it claims to be able to do this even on a fragmented fs. I guess there are limits as to how fragmented; this one was very bad indeed. Anyway, after defrag in XP (ran overnight), resizing NTFS and installing Linux was no problem.
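(For anyone attempting the same dance: ntfsresize from ntfsprogs has a dry-run mode, so you can find out whether it will cooperate before touching the disk. A rough sketch; the device name and target size here are made-up examples:)
$ # Ask ntfsresize how far it thinks the filesystem can shrink.
$ ntfsresize --info /dev/hda1
$ # Dry run: simulate shrinking to 60 GB without writing anything.
$ ntfsresize --no-action --size 60G /dev/hda1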
It’s not about “haha, look at those lusers, they need separate programs to handle malware!”. It’s about people switching and not understanding that the things they so cling to aren’t things they need. Someone who is used to tightly tying the laces on their shoes may well miss the fact that other types of footwear won’t slip off. Another metaphor that would work is a man moving from Alaska to Florida, and sleeping with his blanket every night for fear of freezing to death.
But this is the internet, so go back to the debate on technical semantics over the occasional person actually freezing to death in the worst part of a really bad winter in Florida.
From an aesthetic point of view, I think a proper koan would remove lines 3, 4, and possibly 5.
Like code, koans always have one extra line.
A student of Tendai, a philosophical school of Buddhism, came to the Zen abode of Gasan as a pupil. When he was departing a few years later, Gasan warned him: “Studying the truth speculatively is useful as a way of collecting preaching material. But remember that unless you meditate constantly your light of truth may go out.”
Wow, I tip my hat, Jim – this was a really subtle and elegant kind of trolling. This was roughly on the level of the Churchill-Shaw exchange: “Have reserved two tickets for first night. Come and bring a friend if you have one.” “Impossible to come to first night. Will come to second night, if you have one.”
“Wake me up when Unix systems actually require defraggers and antivirus programs and malware cleansers. Until then, you and grendelkhan are both indulging in petty carping.”
Methinks the emperor’s ego is showing. ESR has a problem with his ego getting in the way at times.
The problem is that it might have been true at some point in time that *nixes didn’t need them, but the times they are a changin’. Or maybe Nothing gold stays.
http://ubuntuforums.org/showthread.php?t=1038090
post #9
A solution in search of a problem, as Slashdot would call it. Manual defragmentation programs are unnecessary as any decent filesystem performs optimization on-the-fly anyway.
This is a myth. Malware cleaners and antivirus programs are unnecessary on Unices because their share of the desktop market is too small, e.g. OS X whose design is laughable from a security perspective.
>This is a myth. Malware cleaners and antivirus programs are unnecessary on Unices because their share of the desktop market is too small, e.g. OS X whose design is laughable from a security perspective.
I’m fairly sure that’s not true. Linux, at least, has better security partly because its multi-user system wasn’t conceived by tacking it onto DOS, and partly because security updates go out quickly for known vulnerabilities. It may largely be a function of ‘most people who run Linux know what they’re doing’, but Linux does inherently have better security than Windows. And how is OS X laughable from a security perspective?
I’m fairly sure you are confusing Windows 95/98/ME with the operating systems descended from NT. In any case, malware gets onto people’s machines mostly via user stupidity and browser bugs, neither of which has anything to do with OS design. There is plenty of malware that does not require system privileges on Windows, and nor would it need them on Linux.
Turn-over time on exploit fixes is a matter of degree. There is nothing ‘inherent’ here, it’s just a matter of Linux possibly doing a better job. That doesn’t discount the possibility of malware at all.
Various things are (or were, last time I looked) bypassable by using the Mach API rather than the BSD API. Most desktop configurations have the main user logging in as the administrator for general use. Etc., etc.
> Manual defragmentation programs are unnecessary as any decent filesystem performs optimization on-the-fly anyway.
This is really more a consequence of today having more CPU than we know what to do with. When FAT was being engineered, the 8086 @ 10MHz was “cutting edge”, and PCs didn’t come with hard disks.
Fast-forward 30 or so years, and flash-based devices are the new dawn in storage, and will likely all but replace rotating media inside 10 years. (Moore’s law at work.)
Oddly, you don’t want to defrag a solid-state drive (since there is no seek penalty), and you still have a limited number of writes ‘available’, but you do want to write files in a linear fashion, when possible. Filesystems will need to adapt. Your children (or perhaps your sibling’s children) won’t understand the concept of ‘defragmentation’.
>And how is OS X laughable from a security perspective?
Someone hasn’t played much with Objective-C.
Fire up Google and whisper “Mach Injection” to it.
There are a plethora of other (local) issues which Apple has been quite slow to deal with. Here a short list from 2005:
http://21c3.annulator.de/
and some more recent (Leopard) stuff to read: http://nchovy.kr/uploads/3/301/D1T1%20-%20Dino%20Dai%20Zovi%20-%20Mac%20OS%20Xploitation.pdf
Anyone who sits back and claims that his / her *nix box is secure (especially after a fresh installation) has been asleep since Jan 1, 1970.
> a really subtle and elegant kind of trolling.
Shepen, glad you enjoyed it.
> It’s not that Unix systems don’t have flaws, it’s that the approach in such systems is to fix the fundamental structure of the OS
You are unwise to take this position. The architecture is not a panacea, at least, not until you get an A-level secure version of *nix.
http://cm.bell-labs.com/who/ken/trust.html
(Though IIRC, B1 or B2 would stop this particular attack.)
This effectively counters the “given enough eyeballs, all bugs are shallow” response as well.
To quote (Ken) Thompson: “No amount of source-level verification or scrutiny will protect you from using untrusted code.”
>> It’s not that Unix systems don’t have flaws, it’s that the approach in such systems is to fix the fundamental structure of the OS
>You are unwise to take this position. The architecture is not a panacea, at least, not until you get an A-level secure version of *nix.
Nobody has claimed that the Unix architecture is a panacea, so you can cease clobbering that strawman. The person to whom you are replying observed — correctly — that in the Unix world we try to address security problems at a level deeper than throwing spackle at the surface cracks. This doesn’t lead to perfect results, but it does lead to better ones.
>Nobody has claimed that the Unix architecture is a panacea
Are you saying that Unix-based systems don’t suffer from malware and virus problems, not necessarily because of the OS’s architecture choices, but because the development model allows for the architecture flaws to be addressed and revamped more readily?
If that’s the case, then the argument, I suppose, is not that *Unix* systems don’t require this corrective ‘spackle’ but that open-source systems don’t require it (although currently, I guess most, if not all, open source OSs are unixy).
>Are you saying that Unix-based systems don’t suffer from malware and virus problems, not necessarily because of the OS’s architecture choices, but because the development model allows for the architecture flaws to be addressed and revamped more readily?
Not quite. The assertion you are attributing to me is true of open-source systems, but open source proceeds from a prior Unix culture that put a high value on addressing the disease rather than the symptom. Even in the era of closed source, it wasn’t our normal method to just spackle over security cracks.
From here:
Once again, Eric is shooting his mouth off about a subject he knows little about. Much of the botnet problem is caused by e-mail users opening and running programs that are sent to them via e-mail, after being warned of the risks of doing so. Some of the largest botnets use this as their primary means of transmission. This has nothing to do with operating system design and everything to do with the userbase. There is no ‘fix’ for this problem other than identifying these programs and removing them, i.e. malware cleaners.
>Much of the botnet problem is caused by e-mail users opening and running programs that are sent to them via e-mail, after being warned of the risks of doing so
But these userspace cracks wouldn’t be anywhere near as serious a problem if Windows weren’t riddled with easy privilege-escalation holes. That part is in fact evidence of Microsoft’s incompetence.
> But these userspace cracks wouldn’t be anywhere near as serious a problem if Windows weren’t riddled with easy privilege-escalation holes.
Not really, or at least Linux is just as bad. An email worm targeted at Linux could do everything it needed to do (propagate, send spam, host warez, etc.) if the user was foolish enough to click on it, assuming the MUA wasn’t jailed using something like SELinux. Even if the worm needed to obtain root in order to do its dirty work, it could easily do so by installing a su trojan.
>An email worm targeted at Linux could do everything it needed to do (propagate, send spam, host warez, etc.)
You had to say “could” rather than “is” because this doesn’t actually happen. Why? First, because we know better than to write MUAs that allow such attacks. And second, because we do in fact deploy things like SELinux — it’s enabled by default in Ubuntu. So the claim “Linux is just as bad” is false, whether evaluated with respect to userland development practice or system security.
@daniel> it could easily do [on linux] so by installing a su trojan.
… or any number of (other) privilege escalation paths.
The distribution model on most linux distros is ripe with opportunity for just the type of attack that Ken Thompson outlined in the early 80s. (Long before esr’s “all bugs are shallow”.)
@esr> “Nobody has claimed that the Unix architecture is a panacea”
“When your house is well constructed, there is no need to add pillars to keep the roof in place.” — “Master Foo”
To be clear about my agenda. I suggest that Eric would contribute *more* to the “open source” community by explaining that all is not well, that Microsh*t isn’t the source of all problems, and that the linux kernel model is not magic.
The attack will come (eventually, especially as linux succeeds, especially along the ‘infrastructure” vector) unless the “community” stops the navel-gazing and gets serious about security.
> First, because we know better than to write MUAs that allow such attacks.
This isn’t a matter of exploiting the MUA; allowing the user the ability to extract and run attached executables from an email file is not a bug. Perhaps most Linux MUAs force you to save the file to disk first and execute it manually. I don’t know; I’ve obviously never tried, and in any case I use Gnus for email, which places me in a tiny minority. Anyway, at some point around here it ceases to be a matter of software design practice and starts to be an instance of the maxim that there is no patch for human stupidity.
> And second, because we do in fact deploy things like SELinux — it’s enabled by default in Ubuntu.
The default policy provides no security against such attacks. More severe policies exist, but these cripple most systems.
Anyway, I think the primary reason that these don’t happen is simply that the intersection of people clueful enough to run Linux on their desktop and people clueless enough to click on executable email attachments is sufficiently small that no blackhat in his right mind would ever bother.
Also, despite Linux’s generally excellent track record, proof of concept for worm exploitability does exist; recall the Ramen Worm. Granted, this was eight years ago and things have gotten a lot better, but it’s not long ago enough to be ancient history.
Eric, Unix is run by geeks and has a user base of maybe a few percent of computers. Windows is run by people who barely know what a mouse is and has a market share of over 90% of computers. You don’t need antivirus software on Unix because few people will bother writing a Unix virus, and even fewer are dumb enough to spread it. If Unix had 90% market share and was in hundreds of millions of homes, you’d need antivirus software almost as much as you do on Windows. I’ll grant that it’s better-built, but no defence is perfect, especially not against social engineering viruses, which most are in one form or another.
As for defragging, I have an old XP box I abuse mercilessly, with a 95% full disk, constant file additions and deletions, serious multitasking, everything you can do to make defragging relevant. I defrag it about every other year, and the reason it’s so rare is that it doesn’t result in any noticeable change. Like others above have said, it’s a dead issue for the vast majority of users.
“A solution in search of a problem, as Slashdot would call it. Manual defragmentation programs are unnecessary as any decent filesystem performs optimization on-the-fly anyway.”
I’ll repeat: http://en.wikipedia.org/wiki/Ext3#Defragmentation seems to say otherwise. In particular, that a defragmenter was _needed_ and _missing_ in ext3 for some applications, and so it will be added to ext4. Granted, it’s a limited scenario (high-load media servers), but it’s one that’s becoming more common by the hour.
On what are you basing this evaluation? Mail can readily be sent from non-root processes, and with sufficient sophistication banking details can also be captured from the browser.
Interesting, but irrelevant to the discussion. We were talking about the myth that Linux is not susceptible to malware.
“Interesting, but irrelevant to the discussion. We were talking about the myth that Linux is not susceptible to malware.”
Hey! ‘It’s Not As Bad As’ Windows!
‘Better than Vista’ is not a good enough baseline. But it’s certainly easier to make that argument than reason about Linux’s shortcomings.
>The distribution model on most linux distros is ripe with opportunity for just the type of attack that Ken Thompson outlined in the early 80s
In fact, there has been at least one attempt to subvert the distribution path – it involved crocked postfix binaries. I’m not sure of the date but IIRC it was around 2002. It was spotted within hours. While this is no guarantee that such an attack will never do damage, the timescale suggests that the Linux ecology is fairly robust against such attacks.
I would explain how your interpretation of “well-constructed house” is erroneous, but doing so would violate the form and purpose of the koan, which is intended to provoke thought rather than enable settling on a conclusion.
>To be clear about my agenda. I suggest that Eric would contribute *more* to the “open source” community by explaining that all is not well, that Microsh*t isn’t the source of all problems, and that the linux kernel model is not magic.
None of these propositions are in question, certainly not by me. You have failed to understand the koan, and should draw wood and chop water for more years, until your horns cannot fit through the door :-)
>We were talking about the myth that Linux is not susceptible to malware.
You are the person who thread-jacked to suggest that I had been shooting my mouth off, fool. It remains the case that those easy escalation holes do exist in profusion, and they are evidence of Microsoft’s incompetence, just as I had originally stated in that different thread. And the escalation holes matter for many reasons; among other things, they give malware ways to cloak itself that would be unavailable to a userland program.
Before you accuse others of not knowing what they are talking about, it would be wise to have a better grasp of the facts yourself.
>If Unix had 90% market share and was in hundreds of millions of homes, you’d need antivirus software almost as much as you do on Windows.
This is the silliest misconception attached to these sorts of discussion. If market share were a good predictor of exploited vulnerabilities, Apache defacements would be roughly three times as common as IIS ones. This is not the case.
What? How can you extrapolate this from two data points? It’s not even in the same design space. Apache/IIS installations don’t have humans in the loop to download and run enticing looking executables.
I didn’t say they didn’t matter. I stated that they were not the reason malware was in such profusion, which you did state. You were wrong, and you can either admit it, or be prepared for anyone who’s even slightly informed to think you’re ignorant on yet another topic you profess supposed wisdom on.
>What? How can you extrapolate this from two data points?
It’s one instance of a more general pattern. Firefox — which does have humans in the loop — also has a much lower incidence of cracks than its proportional market share with IE7 would suggest. Sound design actually matters.
Now you’re confused. You were talking about market share not being a predictor of hacking incidents. In this instance, IE has both the larger market share and the larger share of incidents. Good job shooting yourself in the foot.
>I stated that they were not the reason malware was in such profusion, which you did state
And which remains true. Malware is in profusion partly because it is difficult to eradicate on systems where it can secrete itself in places userland programs cannot readily audit. Furthermore, the ability to take admin privileges means that sufficiently clever malware can foil cleaner programs by subverting the system services they use to do their audits.
For someone who wants to accuse me of not knowing what I’m talking about, you seem remarkably ignorant of your chosen topic. These are both well-known issues.
>IE has both the larger market share and the larger share of incidents. Good job shooting yourself in the foot.
I see that you require instruction in the meaning of the word “proportional” and elementary statistical threat modeling. Perhaps you should go acquire some before you go shooting your mouth off about security analysis?
Nice ad hominem, but you are out of your depth once again. You are mistakenly attributing the success of malware on Windows to a handful of specific techniques that are not in any way universal to malware infection or even necessary for a successful botnet. I am well aware of these issues, I am just not mistakenly overstating their importance as you are.
I am aware of what the word “proportional” means, and what you meant in your comment. Once again I will go back to your incorrect claim:
You then proceeded to cite IE/Firefox as a counter-example to this. Your use of “per installation” statistics is your mistake, and irrelevant to the fact that this is a counter-example to your claim.
Citing the percentage of apache installs and the percentage of cracks/exploits in apache versus the percentage of windows installs and the number of windows vulns makes it look like the expertise of the people manning those installs is equal, which is not the case. A sysadmin usually knows a bit more than J Random Luser.
I have used the ratio of Apache cracks to IIS ones as an indication that design matters more than market share as a predictor of security incident volumes.
I should point out that this is an even stronger argument than it might at first appear, for two reasons.
One: because Apache runs an especially high percentage of the e-commerce, finance, and banking sites that are most attractive to crackers and black hats. If the bad guys could subvert Apache more frequently, they certainly would.
Two: The observed pattern of defects and defacements also suggests that the design superiority of Unix and Linux over Windows matters as well. Apache sites tend to be running on the former, and IIS runs exclusively on the latter.
On the other hand, if you model the rate of actual exploits on the assumption that it is bounded above by actual vulnerability (as opposed to being mainly a function of market share) the observed incident patterns make perfect sense.
>A sysadmin usually knows a bit more than J Random Luser.
Indeed. I don’t discount this. But it’s equally a flaw in the argument that exploit rates are predicted by market share.
Actual vulnerability is not merely a function of system architecture but of the entire technical culture and ecology that surrounds an operating system. If more people understood this, this thread would have avoided several straw-man arguments.
>I am well aware of these issues, I am just not mistakenly overstating their importance as you are.
And your thrashing gets more pathetic. Arguing with me about security engineering from your limited knowledge base is like bringing a knife to a gunfight, kid. Unless you’re much older than you sound, I’ve been dealing with these issues in theory and practice since before you were born, and on more varied operating systems than you know exist.
“Firefox — which does have humans in the loop — also has a much lower incidence of cracks than its proportional market share with IE7 would suggest.”
Obviously IE and FF users are not random samples. The high-end models of Volvo have far fewer accidents than, er, [insert some cheap crappy car here], not only because the car is technically safer (it is) but because the high-end-Volvo user base is less likely to drive shitfaced drunk around 3 AM at 80 MPH in a city. I’m not saying FF is not safer but just saying you don’t quite know how much safer it is until you somehow control for the correlation between FF-usage and tech-knowledge.
BTW, Marshal, I must correct one thing in your comment on OS X: Logging in as an administrator on OS X means something very much different from what it means on Windows. On Windows, when you’re logged in as Administrator, you have the equivalent of root access. On OS X, all it means is that you’re a member of the wheel group, which gives you the ability to make configuration changes manually, and install software and change system files after manually re-authenticating (“Installer requires that you enter your password”). An unprivileged process run by an administrator login has exactly the same privileges as an unprivileged process run by a non-administrator.
>I’m not saying FF is not safer but just saying you don’t quite know how much safer it is until you somehow control for the correlation between FF-usage and tech-knowledge.
Agreed – see my reply to Adriano on the same topic.
I think you may have the relative weight of the examples backwards, though. The correlation between Apache deployment and strong admin skills is probably stronger than the correlation between Firefox use and end-user cluefulness.
That’s a detail, though. The real point is that there is no reasonable model of either the server or browser ecology in which market share predicts the incidence of cracking more effectively than differences in actual vulnerability do; the extent to which actual vulnerability reflects various other factors (design of the OS codebase, skill of the admins) is a subsidiary question.
>Perhaps most Linux MUAs force you to save the file to disk first and execute it manually.
Yes, it’s so.
>The default [selinux] policy provides no security against such attacks. More severe policies exist, but these cripple most systems.
Still, server admins use them and tune as necessary, and Linux distributions are moving towards stricter policies as they figure out workable rulesets. I’m not arguing that this is perfection, just that it is an example of an approach that is design-centered rather than merely plugging individual holes in a reactive way.
Our results aren’t perfect, but our process is better. One prediction this generates is that, over time, the rate of exploits normalized per thousand Unix/Linux systems will actually fall as architectural improvements close off entire classes of holes; this is not an expectation we could have from a process that simply patches individual holes in an ad-hoc way.
I still haven’t seen an interesting extrapolation of what would happen if the number of lusers Windows has right now started using Linux.
Especially now that Vista is training them to ‘insert your password to do *OOH SHINY*’, ‘just like sudo’.
I’ll clarify: up to now, sudo works for its purpose, because the people who use sudo systems are, by and large, clued, and don’t recklessly fill their computers with noxious crap.
When J Random Luser starts his Manbuntu system, and goes to download ringtones/smileys/torrents from their usual infested websites, I don’t have much hope that he’ll fare better than on Windows. This doesn’t assume a badly designed system, it seems to me.
That’s also assuming the malware needs wheel. As stated before, mail can be sent from ports > 1024, or anyway without needing wheel. And, as I said before, the important part of the data, right now, are the user’s documents, which are in many cases irreplaceable, and not the install data, which sits on plastic.
Perhaps someone can clarify how a Linux system is actually safer in this scenario. Hasn’t this kind of malware got access to the user’s address book to replicate? Or access to the session file, to restart itself on login?
> In fact, there has been at least one attempt to subvert the distribution path – it involved crocked postfix binaries. I’m not sure of the date but IIRC it was around 2002.
Are you sure you don’t mean sendmail? Some time around 2002, sendmail.org got cracked and a trojan inserted into the source distribution. I remember this clearly because I was running an LFS system at the time, had chosen that day to upgrade some packages, and got burned by it.
>I still haven’t seen an interesting extrapolation of what would happen if the number of lusers Windows has right now started using Linux.
My guess? We’d see a large initial spike in infiltrations, followed by a scramble as the distros put stricter policies in place (up-gunning the selinux and firewalling rules, in particular). The spike would subside quickly because our update cycle is quite short (see for example the recent counterdeployment against the Kaminsky hole). When the dust settled, we’d have a ‘sploit-per-month frequency a couple times higher than we do now, but I wouldn’t bet on it rising past statistical noise level.
>When J Random Luser starts his Manbuntu system, and goes to download ringtones/smileys/torrents from their usual infested websites, I don’t have much hope that he’ll fare better than on Windows.
I think that hope is reasonable. Think about how downloads are handled on a Linux system – they’re never just executed. There’s not even an option to enable that in Firefox or Kmail/Evolution – the closest you can come is a pop-up box that asks if you want to hand off this filetype to a particular app. So, unless the apps are mis-designed to execute untrusted data, JRL can’t fuck himself up. In fact, they are designed carefully not to do that, and this didn’t happen by accident; Unix tradition matters.
This of course doesn’t prevent attacks via handler applications that can be subverted into executing injected code via an accidental hole, such as a stack-overwrite attack enabled by an unchecked buffer-pointer access. But there are a couple of architectural features in Unix/Linux that reduce an attacker’s likelihood of finding holes like that. One is the separation between GUI apps and the X server.
A more realistic threat vector would be malware in repos, like the subversion of postfix binaries a few years back. The design of Unix/Linux can’t prevent this, but there is at least some evidence that many-eyeballs spots such infestations rapidly.
>That’s also assuming the malware needs wheel.
Malware doesn’t need wheel to function, but it needs wheel to hide from or misdirect its predators.
Actually, I’m wrong. Malware needs wheel for one of its functions, which is covert keylogging to capture things like credit-card numbers. To do that in any useful way (for very malicious values of useful) you need to be able to subvert the kernel or the system-default browser. That takes wheel.
>And, as I said before, the important part of the data, right now, are the user’s documents, which are in many cases irreplaceable, and not the install data, which sits on plastic.
In fact, in the overwhelming majority of malware attacks, the user’s data has no importance whatsoever (with a partial exception for address books). The attacker gains nothing from damaging it (that might alert the system owner that he’s been zombied) and is not normally able to deduce anything very critical from it (because people don’t put their SSNs and credit-card numbers in predictable locations on their PCs). Even address books aren’t that important; it’s just as effective, given most Windows users’ usage patterns, to crack root and snoop incoming email addresses.
>Perhaps someone can clarify how a Linux system is actually safer in this scenario.
Most malware attacks on PCs are either botnet recruiting or attempts to install a keylogger. The latter is difficult if you don’t have privilege escalation holes, for reasons previously noted. The former can be done in userspace, but can’t be stealthed without wheel (you can’t overwrite logs and that sort of thing).
If malware attacks ever looked like they were starting to become common on Linux systems, one answer would be a daemon that does TCP/IP traffic analysis and alerts the system owner to bot-like activity on his machine. You’d need wheel to subvert the daemon. Good selinux rules and use of the immutable bit might be able to scupper that attack with near certainty.
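(A crude sketch of the idea, not the daemon itself: even from userland you can count outbound SMTP connections, and a desktop box holding dozens of them open is probably spamming. The “more than a handful is suspicious” threshold is my assumption:)
$ # Count established connections to remote port 25 (SMTP).
$ netstat -tn | awk '$6 == "ESTABLISHED" && $5 ~ /:25$/ {n++} END {print n+0, "outbound SMTP connections"}'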
Attacks on big servers have a different payoff and a different threat model, of course. So do targeted attacks on individual PCs for purposes like industrial espionage – in that case data accessible from userspace might be valuable. You’d still need wheel to stealth your intrusion with, though.
>Are you sure you don’t mean sendmail?
I’m not doubting your memory that there was a sendmail incident, but part of what attached this tag in my mind is a memory of Wietse Venema getting personally involved in the cleanup. Possibly I have conflated the date of the sendmail incident with an earlier postfix one.
“I think that hope is reasonable. Think about how downloads are handled on a Linux system – they’re never just executed. ”
I thought the modus operandi of the luser is ‘follow immediate directions to porn/satisfaction’, and it implies clicking through, whatever the cost.
An experience countering your statement:
Right now, you can download a .deb package in Ubuntu (just done it a few days ago, for zsnes) and be asked to install it directly from the browser, which effectively gives it whatever power it wants if you’re stupid enough to type your password in the gksudo field (it can now be installed with admin rights, cover its tracks, keylog the fuck out of you…). Or not?
I think we’re going to have more interesting times than you guess in your post.
Otherwise, thanks for your answer.
>Right now, you can download a .deb package in Ubuntu (just done it a few days ago, for zsnes) and be asked to install it directly from the browser, which effectively gives it whatever power it wants if you’re stupid enough to type your password in the gksudo field (it can now be installed with admin rights, cover its tracks, keylog the fuck out of you…). Or not?
It’s actually possible to do this safely. With the right protocols and digital signatures, you can secure this kind of channel against anything short of a subverted router IP-spoofing you to the wrong verification site. (This is why your package manager keeps around GPG keys for critical repos.)
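(Roughly how the apt chain works, for the curious: the repository’s Release file carries checksums of the package indexes, and a detached GPG signature over it is checked against keys already on the system. You can walk the chain by hand; the mirror and release name below are just examples, and the final gpg step only reports a good signature if the archive key is in your own keyring:)
$ # Keys the package manager already trusts:
$ apt-key list
$ # Fetch one repository's Release file and its detached signature.
$ wget -q http://archive.ubuntu.com/ubuntu/dists/hardy/Release
$ wget -q http://archive.ubuntu.com/ubuntu/dists/hardy/Release.gpg
$ # Verify the detached signature over the Release file.
$ gpg --verify Release.gpg Release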
Mind you, I don’t know that the facility you’re describing has been implemented correctly, and professional caution requires me to be suspicious of it. But I know it’s possible and should be within the capability of the Ubuntu folks.
>I think we’re going to have more interesting times than you guess in your post.
There was similar talk after the Morris worm. It came to nothing, and Unix programmers are much more careful about security auditing today than they were in 1988. Of course, past record is not a guarantee of future performance, but insurers whose job it is to price in these sort of risks are not setting their rates very high.
I agree with what you’re saying, it’s only that you and I seem to have a different baseline for stupid.
My stupid user, confronted with a sign saying “This package comes from an unknown repo. It will rape your children. Don’t install it, for the love of God. AAAAAIEEEE!” would just click through without reading. Because they never read.
I don’t think there’s much you can do to solve this PEBKAC problem apart from smashing their dumb little fingers with a morningstar.
> It’s actually possible to do this safely
Perhaps you mean to treat all packages installed from browsers in a different manner, and with different permissions? I hadn’t thought of that.
> I don’t know that the facility you’re describing has been implemented correctly
What I was describing is stock Ubuntu: firefox 3, go to some website, download a .deb, you’ll be asked (in gnome) if you want to open it with (I think) Gdebi. Insert password. flush.
>Perhaps you mean to treat all packages installed from browsers in a different manner, and with different permissions? I hadn’t thought of that.
Well, of course that would be part of it.
>> I don’t know that the facility you’re describing has been implemented correctly
>What I was describing is stock Ubuntu: firefox 3, go to some website, download a .deb, you’ll be asked (in gnome) if you want to open it with (I think) Gdebi. Insert password. flush.
Perhaps you misunderstand, or I’m misunderstanding you. I believe it works as you describe. I also pretty much know how one could design a chain of transmission that would guarantee that if you trust the site you can trust the deb package, and I think it’s a good bet that the feature is actually designed that way. What I don’t know is that the chain of transmission is free of implementation holes.
I think there’s also a fair bit of misunderstanding here over what counts as a security vulnerability.
For example, does going to evilrandomsite.com and getting a virus count as user error, or an exploit? How about opening an email? an email attachment? If I run evilfile.exe, or bin as the case may be, does that make everything it does after that my fault? Even if it doesn’t abuse any holes in the system?
Obviously, we don’t want to say “No, that’s bad, you can’t run that” to the user; the user is always right, even when they’re wrong. Sort of like free speech, you can say ‘you shouldn’t do that’, but making it impossible leads to more problems.
In the koan, you could argue whether it’s referring to running those programs on a regular basis; whether it means “every [day/week/month/few months] I use this program, to make sure my computer stays usable”, or whether it means “I only use these programs when I have to”. Those are two very different things, especially when the second one would only apply to rare cases.
Slightly off topic, but how well would malware be able to keep itself from being deleted on Linux, anyway? I have a friend who, despite being very knowledgeable in regards to computers, technology, and Windows, had a helluva time removing adware that had gotten onto a family member’s computer. Of course, his A/V software didn’t have a fix for it. Took him hours to track down how and where it was starting before he could kill it.
Oh, I had understood you wanted clarification about the process. Thanks for the explanation.
@Andrew T: Agreed. I’m only wondering how rare the rare cases will be. ESR gives hope.
Interesting. For those of you who still doubt my assertion that design of Unix is fundamentally more sound and this matters for security, read Interview with an Adware Author.
As he says, “Windows processes, by the way, are insanely promiscuous”. Many of the persistence and stealthing tricks he describes can’t be pulled off at all under Unix; most of those that can require root. (Yes, Marshal my child, privilege escalation holes actually do matter.) And then there’s this gem:
Q: In your professional opinion, how can people avoid adware?
A: Run UNIX.
Somewhere, Master Foo is smiling.
“why do Unix users not employ antivirus programs?”
Because they are smart enough to not run as root. Unfortunately a culture has built up in other OSes such that programmers believe it is perfectly reasonable to expect all users to run their programs as root so they have access to all sorts of OS features. But that also means programs started as the average Windows user have full access to the system where programs started by the average Linux user do not. “If your user is already root or the equivalent, there is not need to escalate privileges.”
“And defragmenters?”
Because disk fragmentation was solved before 1984 ( http://www.cs.berkeley.edu/~brewer/cs262/FFS.pdf ).
“And malware cleaners?”
Again, because they are smart enough to not run as root. The average Windows user can update the kernel unintentionally by clicking on an email attachment where the average Unix user cannot.
FYI Eric is now deleting my comments, as he does not wish to look foolish.
ESR says: No, silly child, I am deleting your comments because you have nothing to spew but gas and obstreperousness. I will continue doing so until you actually contribute something to the discussion.
This is factually incorrect. All you need to ‘subvert’ the browser is to attach a debugger (ptrace), which on the default installation of xubuntu can be done without wheel.
> All you need to ’subvert’ the browser is to attach a debugger
You’re correct there. Though this one place where a good SELinux policy could be helpful. Nothing except gdb should be allowed to run ptrace, and nothing except your IDE or an interactive shell session should be allowed to run gdb.
>All you need to ’subvert’ the browser is to attach a debugger (ptrace), which on the default installation of xubuntu can be done without wheel.
Ah, now you’ve said something interesting. I’ll try this on Ubuntu (not Kubuntu) and see if it works.
Not just any old ptrace attachment will do, however. You’d need to (a) induce the user to execute the ptraced binary under the belief it was their normal browser, and (b) somehow inject keylogger code. Even the first part wouldn’t be easy – I can’t think of a way to do it without root, offhand, unless dot is in the user’s path and you can con them into running the browser from a shell prompt in a directory of your choice.
This doesn’t seem like a very practical attack. Supposing it were, DF is correct to point out that a selinux rule would scotch it pretty handily. Actually, selinux would be overkill in this case; I could think of several other simpler ways.
You don’t need to induce the user to do anything – you can simply watch for instances of Firefox and attach to them as they are launched. All that is necessary to read text off a form is to break-point the appropriate place and read some values out of memory (this does not require root). This kind of thing is bread-and-butter for “crackers”.
>All that is necessary to read text off a form is to break-point the appropriate place and read some values out of memory
No sale; you can’t ptrace across a privilege boundary like that (I checked). I would have been astonished if it were otherwise.
What privilege boundary? Firefox runs as your user. Attach GDB and breakpoint a function and read some values out of memory. It will work.
> What privilege boundary? Firefox runs as your user. Attach GDB and breakpoint a function and read some values out
> of memory. It will work.
Perhaps esr can embark on a bit of self-education with pcat.
http://tct.sourcearchive.com/documentation/1.18/pcat_8c-source.html
> Somewhere, Master Foo is smiling.
Because the adware author doesn’t want to work hard?
Master Bar says, “Feh.”
Feh. See Mosquito Lisp (now Wasp Lisp): http://waspvm.googlepages.com/
>What privilege boundary? Firefox runs as your user. Attach GDB and breakpoint a function and read some values out of memory. It will work.
I’ll test, but I still see serious problems with this approach. One is selinux; another commenter has already suggested a policy that would stop it cold.
Sorry for interrupting the discussion, but I noticed that your koans page on catb.org cites this particular one as being written on 2008-01-12; either this is a mistake, or you’re just posting year-old material. I think the former is more likely given the HTTP headers sent :-)
$ curl -I http://catb.org/esr/writings/unix-koans/index.html
[…]
Last-Modified: Sun, 11 Jan 2009 14:00:46 GMT
> Last-Modified: Sun, 11 Jan 2009 14:00:46 GMT
Master Bar is unhappy that the student hasn’t yet learned to automate the revision history, with, say, a source code control system, but seems willing enough to “fake it”.
An SCM would be overkill for such a work… especially in the pre-Git/Mercurial days it was started in, where all the DVCSes sucked, and centralized control like CVS (or the not-yet-released-as-of-2003 Subversion) would be especially overkill.
I have no difficulty attaching gdb to a running firefox process on Ubuntu Server Edition.
dfranke@feanor:~$ ps aux | grep firefox
dfranke 8823 0.0 0.0 5164 836 pts/2 R+ 21:55 0:00 grep firefox
dfranke 22376 0.0 0.0 3944 552 ? Ss Jan05 0:00 /bin/sh -c firefox
dfranke 22377 0.6 6.2 983556 507548 ? Sl Jan05 72:38 /usr/lib/firefox-3.0.5/firefox
dfranke@feanor:~$ gdb
GNU gdb 6.8-debian
Copyright (C) 2008 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type “show copying”
and “show warranty” for details.
This GDB was configured as “x86_64-linux-gnu”.
(gdb) attach 22377
Attaching to process 22377
Reading symbols from /usr/lib/firefox-3.0.5/firefox…(no debugging symbols found)…done.
Reading symbols from /lib/libpthread.so.0…(no debugging symbols found)…done.
—snip—
(lots o’ spam)
—snip—
(gdb) bt
#0 0x00007f24590edc86 in poll () from /lib/libc.so.6
#1 0x00007f2454e2d4d2 in ?? () from /usr/lib/libglib-2.0.so.0
#2 0x00007f2454e2db5b in g_main_context_iteration () from /usr/lib/libglib-2.0.so.0
#3 0x00007f2457bc465d in ?? () from /usr/lib/xulrunner-1.9.0.5/libxul.so
#4 0x00007f2457bc49ba in ?? () from /usr/lib/xulrunner-1.9.0.5/libxul.so
#5 0x00007f2457c6dfed in ?? () from /usr/lib/xulrunner-1.9.0.5/libxul.so
#6 0x00007f2457c422e5 in ?? () from /usr/lib/xulrunner-1.9.0.5/libxul.so
#7 0x00007f2457bc473f in ?? () from /usr/lib/xulrunner-1.9.0.5/libxul.so
#8 0x00007f2457a5c63f in ?? () from /usr/lib/xulrunner-1.9.0.5/libxul.so
#9 0x00007f24574d30e8 in XRE_main () from /usr/lib/xulrunner-1.9.0.5/libxul.so
#10 0x00000000004014f7 in ?? ()
#11 0x00007f245903d1c4 in __libc_start_main () from /lib/libc.so.6
#12 0x00000000004010f9 in ?? ()
#13 0x00007fff62156de8 in ?? ()
#14 0x0000000000000000 in ?? ()
> An SCM would be overkill for such a work…
Dude, I keep all my *nix config files (/etc/passwd, /etc/group, /etc/networks, printer.conf, sendmail.cf, etc.) under SCM, and have since the late ’80s (with RCS) at Convex. In large environments, I had cron jobs (also under RCS) that would automatically check out the latest revision (on the main branch) of all these files sometime after midnight.
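(The mechanics are nothing fancy, just RCS ci/co against an RCS/ subdirectory. A sketch, with /etc/hosts standing in for whichever file you care about:)
$ # One-time setup: give the directory an RCS subdirectory.
$ cd /etc && mkdir -p RCS
$ # Check the file in and keep a locked, editable working copy in place.
$ ci -l -t-"baseline" hosts
$ # Later: see who changed what, and when.
$ rlog hosts
$ rcsdiff -u hosts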
tchrist (of Perl fame) helped set it all up (as he was running the marketing machines @ Convex back then.)
That kept the ‘undocumented changes’ and ‘who touched it last’ stuff to a low volume.
Oh ya, Brian Berliner (of CVS fame) worked at Convex, too. See: http://jogger-egg.com/jogger-what.html
(K was a roommate of mine for a while back then.)
Anyway, one would think that the author of the latest version of “sccs2rcs” would understand SCMs well enough to adapt one into something that is clearly attempting to be “version control”, without faking it.
There are still loads of ways to ‘subvert’ the browser, such as patching a copy of the browser on disk and changing links on the UI to point to the copy. There is nothing magical about either kernel-space or root; it’s simply a matter of scope.
From a recent post by D. Joe Anderson on linux-elitists. I think it fits here. Emphasis mine.
Seemingly, Eric can’t understand that he’s wrong about attack vectors requiring “wheel”, and his continued assertion that linux is secure because of its ecology: He doesn’t seem to want to understand that unix malware could take the form of a “useful” program (binary codecs, anyone?) downloaded from the net, that would either run in the same process space as the browser, or, failing that, could simply run itself as a daemon (perhaps it would name itself “fetchmail”), and then wait to pattach() new instances of the browser.
And we get defensive stances such as:
link
When I’d already cited the Aug 1984 Thompson paper in CACM earlier in the thread.
This whole thread is starting to remind me of the time esr flamed Firth: http://www.smallworks.com/blog/?p=380
especially when we are treated to rants such as:
Unfortunately, the mechanism for authenticating can be hooked using (amongst many other things) ptrace and used to run an arbitrary executable as root (at least on 10.4 which I am running). The relevant API is “Authorization Services”.
The novice then said to Master Foo, “But Master, an antivirus program is trivial to install and use, takes up negligible space and CPU time, and covers 99% of malware cases. What does your path have to offer me? Disciples of my school don’t have to put up with their sound not working.”
I actually had this argument the other day with a Windowsite. I’m still sore about it. Clearly I lack Master Foo’s inner peace.
I see Marshall’s second-latest posting got deleted.
I think a good portion of this has to do with differing mechanisms of software distribution. I very, very rarely execute, much less install, anything that didn’t come from the Ubuntu repositories in a signed package. If the average Windows user only executed code that came through the Windows Update channel, the botnet problem would be considerably smaller. (Of course, there are plenty of attack vectors which don’t require user authorization or consent, but the standard software distribution channel is at least part of it.)
As for Eric’s assertion that this only makes a difference because Windows is full of privilege-escalation holes… I still maintain that the gap between users running random flotsam that comes in over the wire and not doing so is far greater than the gap between users running random flotsam on a well-built system and on a poorly-built system. What good are all your MACs going to do when the README says “Super Fun Screensaver isn’t SELinux-compatible; run ‘echo 0 >/selinux/enforce’ as root to allow!”. Or, hell, “Super Fun Screensaver must be run as root; prepend ‘sudo’ to your invocation”.
Given that “sudo” can be interpreted by the end user as repeating a command louder and slower (consider: “eject” doesn’t work, but “sudo eject, damnit!” does, as a usage pattern), how can you claim that the automated aspects of the system are more important than the ones that educate the user to not do bad things?
Ah, heck, Adriano pretty much covered this. In short, the weak link is installed between the keyboard and the chair; a major change in user behavior is worth more than a thousand pain-in-the-ass security infrastructures.
Are you sure? I know it’s certainly possible to enable it, but it’s certainly not enabled on upgrade to Intrepid, and it wasn’t enabled on a fresh Hardy install.
You can log keystrokes for an X session without root access and without alerting the user. It seems reasonably “useful”, as these things go. “Super Fun Screensaver” doesn’t even need you to run it as root while it’s sitting in the tray, sniffing your keystrokes and sending them out over IRC. (But hey, it’ll have to connect from a port over 1024. That’ll show those blackhats.)
Given that the individual packages are signed by keys which are put on the system at install time, even a spoofed router can only, at best, fail to inform you of new updates; any corrupted package will throw a warning. Am I missing something?
On that note, the chance that J. Random User will be checking “the right protocols and digital signatures” is roughly the same as the probability that J. Random User will check the SSL certificate on their bank’s website. (That’s not entirely User’s fault, though.) Between User and porn, or screensavers, or hot new torrents, there’s an annoying request to enter their password, and an annoying warning about an invalid signature. (If you’ve gotten annoyed by the password popups, you can add NOPASSWD to your /etc/sudoers and not even know when an app is spawned with sudo.) These things haven’t stopped people on Windows; why would they stop people on Linux?
Is this just wishful thinking? I’m reasonably certain that Firefox passes a .deb to Gdebi (or whatever) with as much security hoo-ha as it passes an MP3 to Totem; do you have some evidence otherwise?
Opening up a package manager as a media handler is one of those features that might be useful for a few people, in some instances, when they really, really know what they’re doing. At the same time, it opens up a whole world of hurt for everyone else. Yes, it makes a task easier, but it’s a task which shouldn’t be easy.
I miss the Linux Hater.
Ready for a nice *facepalm*? http://www.wkowtv.com/global/story.asp?s=9667184
I know this one! The “copy of the browser on disk” belongs to root and can only be patched by root. Changing symlinks to it is a real threat, similar to putting “.” in your path. Although I’m in the minority, I don’t use icons, aside from opening an xterm, where I type my commands (including “firefox &”).
But your SE Linux policy may allow gdb to run ptrace. Does it allow executables stored in a user’s home directory to run ptrace?
To run in the same process space as the browser it would be a plugin, and would be subject to the security rules the browser has in place when running plugins. A binary kernel module codec, on the other hand, can only be installed by root, which is exactly why Unix users don’t do their normal day-to-day work while logged in as root.
That’s the point of SE Linux policies that would prohibit any program other than gdb from calling ptrace (or pattach).
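To make the ptrace question concrete, here is a minimal sketch of my own (not anyone’s actual test): it simply tries to PTRACE_ATTACH to a PID given on the command line. The kernel refuses attaches across user boundaries with EPERM, attaches to your own processes ordinarily succeed, and an SELinux policy of the kind described above could deny even those.

    /* attach-test.c -- illustrative only: see whether we are allowed to attach
     * a debugger-style trace to a given process. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <errno.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(int argc, char **argv)
    {
        pid_t pid;

        if (argc != 2) {
            fprintf(stderr, "usage: %s <pid>\n", argv[0]);
            return 2;
        }
        pid = (pid_t)atoi(argv[1]);

        if (ptrace(PTRACE_ATTACH, pid, NULL, NULL) == -1) {
            printf("attach to %d denied: %s\n", (int)pid, strerror(errno));
            return 1;
        }
        waitpid(pid, NULL, 0);    /* wait for the target to stop */
        printf("attached to %d; a tracer could now read its memory\n", (int)pid);
        ptrace(PTRACE_DETACH, pid, NULL, NULL);
        return 0;
    }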
That is a serious security flaw. Windows has the same flaw, but it still needs to be addressed.
Because it does not take up negligible space and CPU time, while using non-privileged user accounts does ( http://www.codinghorror.com/blog/archives/000803.html ).
> Ready for a nice *facepalm* ? http://www.wkowtv.com/global/story.asp?s=9667184
I hope that’s not meant to be an indictment of Linux.
> I hope that’s not meant to be an indictment of Linux.
I’m pretty sure it’s supposed to be an indictment of clueless MSM tech journalists.
I’m afraid you don’t know this one. A copy of the binary does not have to be owned by root. It can be owned by anyone with enough quota to store it. Unless you restrict desktop customisation heavily (that sounds like a useful UI), malware running as that user is going to be able to fake out pretty much any program.
Fantastic. Unfortunately this does not solve the problem, it just makes the attack less elegant, as explained above. There is no end of ways to do ‘sneaky’ things in userspace. Hence the rise of antivirus programs on platforms for which there is an interest in developing malware.
I think that Marshal is right in that if the user is clueless enough, then whatever you, the developer, do, a virus can do damage. However, if you extend this to the idea of antivirus software, then you easily get things like ‘Super Fun Screensaver sometimes triggers antivirus programs, but it isn’t a virus. If an antivirus program warns you about it, click Cancel’. I mean, if someone is determined to infect their computer, then nothing is going to do much about it.
I agree.
Aren’t delayed allocation and preallocation two key features that would further reduce the likelihood of fragmentation compared to ext3?
Oddly enough, these are major new features in ext4, which is also getting support for online defragmentation. Go figure.
Verily, this is the year of Linux on the desktop!
At least, all the kids in the comments claiming that the Dell rep was right to send the customer an unfamiliar OS, and that it’s all her fault for not being able to learn a weird and unfamiliar OS out of the box, say so.
Free software is like Christianity–the worst thing about it is the damned fan club.
>Verily, this is the year of Linux on the desktop!
>At least, all the kids in the comments claiming that the Dell rep was right to send the customer an unfamiliar OS, and that it’s all her fault for not being able to learn a weird and unfamiliar OS out of the box, say so.
>Free software is like Christianity–the worst thing about it is the damned fan club.
The article references no problem with Linux at all–the problem was with Verizon, for using Windows-only install discs; the class, for requiring MS Word; and the newspaper, for considering this sort of thing newsworthy.
“Someone doesn’t support Linux. Film at eleven.”
Yes, it is possible for people to make binaries in their own directories and to own those binaries. As a programmer who has used Linux as my Desktop for a decade I routinely write programs, compile them in a directory that I own, have the resulting binaries belong to me, and run them.
But (and this is the point) each and every Linux distribution makes each and every binary in /bin, /sbin, /usr/bin, /usr/local/bin, etc. and each and every system-wide library (1) belong to root, (2) writable only by root, and (3) only executable by users on an as-needed basis. So it is absolutely impossible to change the “copy of the browser on disk” because under each and every Linux distribution that binary belongs to root and can only be modified by root.
On the other hand, it is possible for a user to build his own version of Firefox, set permissions appropriately, and then change the symlink that’s on his own Desktop to point to his new version of Firefox. However, he cannot change somebody else’s symlink on somebody else’s Desktop.
As an attack, then, you need to have the user somehow set this in motion. And, of course, users are known to set things like this in motion quite regularly. So, as I originally said, this is an actual issue, and it’s similar to putting “.” in your PATH. Your only hope is that something like Tripwire will notice the change. But Tripwire is usually configured to not look at users’ home directories because there would be too many false positives there.
The problem with how Windows handles this, by the way, is that although all binaries belong to Administrator (and can only be changed by Administrator), normal user accounts are generally considered Administrators. The only reason is that a culture has built up in the Windows world in which many commercial programs consider it perfectly reasonable to require superuser access, so many commercial programs won’t run correctly on Windows unless the user running them is a superuser.
And since most Windows users are effectively already running as root, there is absolutely no reason to bother with privilege escalation attacks. Instead you just need to ask users to click on bad things; users really like to click on things, so it’s not all that hard to convince them.
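A tiny sketch of the ownership point made above, for anyone who wants to verify it on their own box (which paths you feed it is up to you): stat() the file and look at who owns it and whether anyone besides the owner can write to it. /usr/bin/firefox should come back owned by uid 0 and not group- or world-writable; a copy stashed under ~/bin will come back owned by you.

    /* whoowns.c -- illustrative only: report a file's owner and whether it is
     * writable by group or world. */
    #include <stdio.h>
    #include <sys/stat.h>

    int main(int argc, char **argv)
    {
        struct stat st;

        if (argc != 2) {
            fprintf(stderr, "usage: %s <path>\n", argv[0]);
            return 2;
        }
        if (stat(argv[1], &st) == -1) {
            perror("stat");
            return 1;
        }
        printf("%s: owner uid %d, mode %04o%s\n",
               argv[1], (int)st.st_uid, (unsigned)(st.st_mode & 07777),
               (st.st_mode & (S_IWGRP | S_IWOTH)) ? " (writable by others!)" : "");
        return 0;
    }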
The discussion you were addressing was about whether malware on Linux could obtain banking details or keystrokes after being run, not about the integrity of system-wide binaries.
> Free software is like Christianity–the worst thing about it is the damned fan club.
Does this make Windows like Islam and MacOS like the CotFSM?
“I’m pretty sure it’s supposed to be an indictment of clueless MSM tech journalists.”
And users. Of course, I know there is no point in blaming users, and cluelessness can be mitigated technologically. What I meant is that we should expect many of the virtues of Linux (Linux not as a collection of software, but as a collection of software plus users, as a social phenomenon), such as security, to erode as the clueless get their hands on it, which is imminent.
“The article references no problem with Linux at all–the problem was with Verizon, for using windows-only install discs; the class, for requiring MS Word; and the newspaper, for considering this sort of thing newsworthy.”
Not a bit of truth in there. Verizon made a CHOICE (ya’ know, open source, it’s all about choice, right?) to support one OS over another (of course, since most of the crap on the CD is useless, it could have just been chucked by a knowledgeable user). The class, and probably the school as a whole, made a CHOICE (there’s that word again) to use what is likely the most widely used office suite out there. And the newspaper made the CHOICE (oh my, the world abounds with choices, doesn’t it) to inform people to make sure they know what they’re choosing.
The average user doesn’t have the expertise that many if not most readers of this blog have. They are the reason that you can sue McDonald’s for having their coffee too hot–they just don’t know about these sorts of things and rely on others to take care of it for them. Kinda’ like science: we don’t know all of this stuff about physics or chemistry, just what people who enjoy it tell us. How often do you read every science journal out there about your favorite scientific subject? Shoot, let’s just take one big item: smoking causes cancer. Have you read all the data, performed empirical experiments yourself? Probably not; most people don’t. They rely on the experts, and in that article’s case it was Dell that was the expert, and she trusted them.
So there weren’t any problems, just some fanatics getting their knickers in a bunch because not everyone supports their choice.
“> Free software is like Christianity–the worst thing about it is the damned fan club.
Does this make Windows like Islam and MacOS like the CotFSM?”
No, I think free/open source is more like Islam (remember, zero, amongst other great scientific ideas, came from Islamic and Arabic countries), in that there are fanatics in it who would like to wage jihad against the Microsoft Devil. Mac OS X wants to be like Buddhism – tranquil and user-friendly. Windows is more like Christianity – there are quite a few flavors, it has some holes in it and… no, comparing OSes to imaginary friends just doesn’t work…
Unless you look at it like this: an OS’s work is never done; it’s always doing something in the background to keep your life working right, and most people really don’t want to notice it in everyday use.
Windows is like Scientology: It’s rather internally self-consistent except for when it’s not; it incorporates some interesting ideas but fits them into a framework so horridly psychotic as to make a sane person either laugh or cry. I’m not even talking about the fan base so much as I am the engineering culture that built the system.
Oh, and it costs a lot of money and its administrators are, if anything, adept at locking you into their system and making you pay for the next component.
Dear Tom-Dickson Hunt:
In my opinion, if you are going to proclaim yourself an expert on the security model of Linux, you should actually know how it works. Eric does not. He does not know even basic things like how ptrace works. He even _tested_ his (wrong wrong wrong) theory, and wrongly concluded that he was right. You cannot make credible comments on the viability of malware on Linux if you don’t know things like this. Instead we get silly “it hasn’t happened so it can’t” arguments, which are so unscientific as to be laughable.
Even if you are willing to forgive these incompetencies, it’s hard to ignore hilariously over-confident comments such as:
Thinking you know more than you do, and never admitting to being wrong is the mark of an idiot. Admitting you are wrong is the sign of someone who is able to accept apparent truths regardless of their ego.
>Thinking you know more than you do, and never admitting to being wrong is the mark of an idiot.
Marshal, for example, apparently thinks ptrace is a magic wand that can subvert processes across security boundaries, and cannot be prevented from doing this by SELinux capabilities.
Quote me where I state that ptrace is not affected by SELinux. I am basically through here, since anyone with any technical chops on this board knows how clueless you are.
esr, Marshal, I think you both need to dial back the flamage and figure out what it is that the other is asserting. Do either of you disagree with any of the following statements?
1. Users cannot ptrace other users’ processes.
2. Absent SELinux, users have complete freedom to ptrace their own processes.
3. SELinux policies can limit ptrace.
4. No major Linux distribution currently ships with a default SELinux policy that restricts ptrace in any way relevant to this discussion.
I think this thread started to go off the rails way back here:
From this point forward, ESR and Marshal are talking about two different scenarios. ESR is talking about the potential to place browser-downloaded applications into a sandbox, with security boundaries (different uids, and/or an SELinux policy) between it and the rest of the user’s desktop. Marshal is continuing the email worm discussion from earlier in the thread, and saying that if the user can be persuaded to save and run a malicious binary, then nothing presently shipped with Ubuntu prevents that binary from running with the full privileges of the user, and that it is then, if the user is in wheel, straightforward to obtain root through such means as a su trojan or ptrace snooping.
If I have all this right, then you’re both correct and both completely misunderstanding what the other is saying.
>If I have all this right, then you’re both correct and both completely misunderstanding what the other is saying.
Marshal forfeited the right to continue this argument by attempting to commit identity fraud; I will not allow him on this blog again. If you wish to argue the technical case, so indicate and I will respond.
Unless I’ve mischaracterized anything about your position in my above comment, then there’s nothing for us to argue over. I was trying to salvage the dispute between you and Marshal, but it appears I was too late to prevent him from dropping a tactical nuke on that bridge. Oh well. Good riddance then.
So Marshal is banned. As the Instapundit would say, Heh.
Well good riddance to bad rubbish I say.
Marshal established himself as an a-hole on the Gaza thread.
I don’t know that I understand the “argument from the CLI” that I heard above from Max Lybbert, which was at least alluded to as a ‘solution’ to the user-is-idiot problem… I will quote:
“Although I’m in the minority, I don’t use icons, aside from opening an xterm, where I type my commands (including “firefox &”).”
Ok, so if a user-level ‘badware’ had at some point been run, and had modified ~/.bashrc with PATH=~/bin:$PATH, then just installed BadFox at ~/bin/firefox, how does ‘opening an xterm’ and typing “firefox &” save you from badness? I could understand if you typed /usr/bin/firefox, but you say you don’t.
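To make that scenario concrete, here is a minimal sketch of roughly what the shell (via execvp) does with $PATH: walk the entries in order and take the first executable match. If ~/bin has been silently prepended and a bogus firefox dropped there, that is the one an xterm invocation of “firefox &” will start. (A sketch for illustration only; the command name defaults to “firefox” just as an example.)

    /* which-wins.c -- illustrative only: show which file a bare command name
     * would resolve to, given the current $PATH ordering. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        const char *cmd = (argc > 1) ? argv[1] : "firefox";
        char *path = getenv("PATH");
        char *copy, *dir;

        if (!path)
            return 1;
        copy = strdup(path);
        for (dir = strtok(copy, ":"); dir; dir = strtok(NULL, ":")) {
            char candidate[4096];
            snprintf(candidate, sizeof candidate, "%s/%s", dir, cmd);
            if (access(candidate, X_OK) == 0) {   /* first match wins, just as in execvp */
                printf("would run: %s\n", candidate);
                free(copy);
                return 0;
            }
        }
        printf("%s: not found in PATH\n", cmd);
        free(copy);
        return 0;
    }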
It’s nice when reading the whole thread is, most of the time, rewarding. The last comment pretty much summed up my concerns: there is no generalised solution to the PEBKAC problem, at least at the moment. The security models can be better or worse, and that will be more or less reflected in the average number of infections, but still there’s no silver bullet: the ignorant, security-unconscious user will find his way to shoot himself in the foot.
I tend to conclude that the architectural superiority of Unix, whether real or imaginary, only affects the odds; it does not actually resolve the problem. Education is what would solve these issues, but here we’ve fallen way too far behind (with the possible exception of some crazy OpenBSD hackers), and things are getting even worse, as the bar of “I don’t care at all how it works” has already been set by the businesses, while security is doubt, and with doubt comes knowledge.
I will now rejoin Master Bar’s meditation on the vanity of this sorry world.