The do-anything-you-want design of computers nowadays isn't the right answer for most people.
Being able to set your desktop background, change all your icons, create directories, and install random software has done nothing for this world but create support problems.
I personally don't care for it, but the iPhone way of giving you a small menu of options and only letting you do the few things they allow is really a better answer for the majority of consumer electronics users.
This is true for iPhones and DVD players and TVs. Remember when they started putting zillions of options on TVs and nobody used any of them?
Remember when they put clocks on VCRs and they all blinked 12:00?
People don't really want infinite configurability, and that is the fundamental problem with the PC design.
Linux following in their wake wasn't a terribly brilliant move either. And don't be surprised if a lot of options go away on the Mac desktop as well.
One Linux-based product is already poised to overtake Apple. Android, which is a Linux-based software stack, is growing fast. By some measures it has already overtaken the iPhone; by most other measures it will overtake the iPhone sometime in 2011 or 2012 ... for the same reason Windows 3.1 dominated over Apple when PCs first arrived on the scene.
Your initial point is correct, though. People don't *want* their mobile devices to look or quack like desktop computers. Keeping the environment fairly rigid is important for a low-power portable device -- Palm figured that out a very long time ago.
By the way ... Apple did the Linux world a big favor. Desktop Linux systems have had online software repositories for a long time, but Windows users were accustomed to "put in the CD and run SETUP.EXE". Now, thanks to Apple, people are totally aware of how an "app store" works.
As a result, the argument of "oh, to use Linux you have to learn how to compile software from source code" (which is of course a myth, or FUD if you prefer) is now easily refuted. If the Linux in question is Ubuntu, for example, you can just say "No, it has an app store. Here it is. See?"
Once again you are falling into the trap of thinking of "Linux" as a
single product controlled by a single entity.
HA HA!!! You ADMIT it is a TRAP! :-)
You're right, though. I mean Ubuntu, or really any intended-for-desktop distro.
Your initial point is correct, though. People don't *want* their
mobile devices to look or quack like desktop computers. Keeping the
My point is that most people don't want their desktop computers to look or quack like desktop computers either.
Microsoft's decline may not be slow. IBM's was slow for a period, but Big Blue went downhill quickly after they sold their basic stuff to National Cash Register. MS may start selling divisions off.
They kiboshed their flight simulator dept, no? They could have sold that.
What I keep hearing about .Net is that C# is an elegant language, but
the .Net runtime is typical Microsoft rubbish (basically a GC'ed
version of Win32).
Wondering where you're hearing that, aside from your usual anti-Miguel sources. .NET's runtime is also quite elegant and has a few key efficiency features that are missing from Java's overly simplistic memory model: value types and reified generics. The importance of reified generics can't be overstated; it cuts runtime memory overhead by a significant factor by allowing value types and primitives to be stored inline.
Some of the libraries are also quite nice. IEnumerable/IQueryable are serious functional-programming heavyweights.
memory model: value types, and reified generics. The importance of
reified generics can't be overstated; it really cuts down on runtime
erm... I do so enjoy reading up on all this stuff you throw our way; it's my new way of keeping up with my resume acronyms. Having read exactly 2 paragraphs on the subject, I am now an authority, so let's debate for a minute exactly why C#'s way is better.
I mean, from a design point of view, sure it's better that it's part of the entire system design and not just a language feature. But if I'm reading it right, Java throws out all the generic type stuff (which is fine by me) and doesn't put it in the bytecode, and C# does.
So C# has to spend more time dealing with generic types at runtime.
How is this better? (you know me, it's all about performance)
I'm probably not getting the whole picture, but if all the generic goofiness for a type simplifies to the same thing in Java, and doesn't in C#, doesn't that just make C# code more bloaty while functionally doing the same thing?
If the compiler has verified that everything is type safe, why waste runtime CPU dealing with it at all?
I remember the see-carpet pet-shop being 1/3rd the LOC of the Java pet-shop.
Java generics can generalize over object types only. They can't generalize over primitives (they handle that case by autoboxing primitives to objects), and they can't generalize over value types, because Java doesn't have value types.
So in Java, you can never define a "struct foo" and an array of struct foo, and have that array be stored as a linear list of struct foos all inline; it has to be an array of pointers to object foo instead. My understanding is that C# can do that... and the collection classes can also do it in a generic fashion. The memory overhead can be a rather large constant multiplier, and collection traversal is also quite a bit slower.
So, C# can do this because it preserves the type information at runtime, which allows it to basically treat generic classes as templates if it decides that it's necessary.
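A small sketch of what "Java throws out all the generic type stuff" looks like in practice (the class and variable names here are my own, not from the thread): the type parameters exist only at compile time, so every parameterization of ArrayList is one and the same raw class at runtime.

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<String>();
        List<Integer> ints = new ArrayList<Integer>();

        // The <String> and <Integer> were erased by the compiler:
        // both objects share exactly one runtime class.
        System.out.println(strings.getClass() == ints.getClass()); // true
        System.out.println(strings.getClass().getName()); // java.util.ArrayList
    }
}
```

In C#, by contrast, List&lt;string&gt; and List&lt;int&gt; are distinct runtime types, which is precisely what lets the runtime lay out a List&lt;int&gt; as inline ints rather than as pointers to boxed objects.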
Aug 5 2010 2:39am from dothebart @uncnsrd
I remember the see-carpet pet-shop being 1/3rd loc of the java
probably mostly because of properties vs setters/getters.
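For anyone keeping score, this is the boilerplate in question (Customer is just an illustrative name of mine): a trivial Java bean needs a field plus two hand-written methods, where the C# version is a single line, roughly "public string Name { get; set; }".

```java
public class Customer {
    private String name;

    // Hand-written accessor pair; C# auto-properties generate
    // the equivalent of both of these from one declaration.
    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
```

Multiply that by every field in a data-heavy app like a pet shop and the LOC gap adds up fast.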
So, C# can do this because it preserves the type information at
runtime, which allows it to basically treat generic classes as
templates if it decides that it's necessary.
C# can do that because it has value types, and Java simply doesn't. Forget the generics for a second: I gather C# can also line up an array of whatever objects in memory, and Java can't, simply because Java was not designed to allow for that. The generics just make doing that with arbitrary types possible, but the actual flaw (limitation, whatever) is in the everything-is-an-object-except-for-the-primitive-types hack design of Java.
That's true enough, as far as it goes. In Java, though, it would be nice to have generics capable of representing primitive types, to make numeric computation faster... (Not that this is a use case I care about, but it would really help the number-crunching guys.)
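To put the number-crunching point in concrete terms, here's a rough sketch (mine, with made-up names) of what Java code is stuck with today: generic collections can only hold objects, so numeric data gets boxed, while a primitive array keeps the values inline.

```java
import java.util.ArrayList;
import java.util.List;

public class BoxingCost {
    public static void main(String[] args) {
        int n = 100000;

        // Every int added here is boxed into a separate heap-allocated
        // Integer; the list stores a pointer per element, and summing
        // chases a pointer (and unboxes) for each one.
        List<Integer> boxed = new ArrayList<Integer>();
        for (int i = 0; i < n; i++) boxed.add(i);
        long sumBoxed = 0;
        for (Integer v : boxed) sumBoxed += v;

        // A primitive array stores the values inline with no
        // per-element object header or indirection -- the layout a
        // C# List<int> gets for free from reified generics.
        int[] raw = new int[n];
        for (int i = 0; i < n; i++) raw[i] = i;
        long sumRaw = 0;
        for (int v : raw) sumRaw += v;

        System.out.println(sumBoxed == sumRaw); // true
    }
}
```

Same answer either way; the difference is the constant-factor memory and traversal overhead discussed above.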
That's the way Scala does it, on both counts.
Oh, but it's a sloppy language, right? Not big on types?
The syntax will probably take some getting used to. The fundamental idea is: (a) every construct should be as general as possible; (b) everything is an expression.
(a) requires some meditation and long explanation, but
(b) allows you to do things like this:
val myvariable =
  if (something) "yes" else "no"        // if/else is an expression with a value

val transformedList =
  for (x <- someOtherList) yield x * 2  // for/yield is an expression too
It's not a sloppy language; it's a strong/static type system that closely resembles Java's, with a few differences. They have added a form of structural typing (aka duck typing) for classes. In other words, you can declare "this parameter accepts any object that defines fields named x and y", and this is checked statically at compile time (not to be confused with Objective-C/Smalltalk-like systems, where all such checking happens at runtime).
Function types are also structural types (as opposed to nominal types, as in C# with its delegate types). Function types don't have names; they just declare a matching signature, and they are statically type checked.
You might look at "val myvariable" above and think "that looks like a sloppy language." But behind the scenes, the compiler knows the type of "myvariable" at compile time, via type inference. (There are several situations where you must declare the type when the compiler can't figure it out on its own.)
The IDE has a mouse hover to indicate the type of a variable.