There's another problem with Japan.
Japan is the only place that makes tape media used by broadcasters, from what I understand. That stuff just got very, very expensive.
They should probably ask themselves why they're still using tape media :)
I asked that, too.
These days, most of these guys are using digital tape, no less... which just seems bizarre to me.
The broadcasting industry is only just now starting to dip its collective toe into the digital age. This industry is so freakishly far behind the technology curve, it's amazing they can get anything done at all. But then, I suppose this is the industry that hasn't realized it's soon to be obsoleted by the internet.
Being in the digital video storage marketplace would probably be a good place to be right now.
This industry will see some changes pretty soon anyway. Getting away from tape can only be a good thing.
While we aren't involved in digital video storage directly, we are a very helpful component of it. I just hope we get a little more popular as a consequence.
I honestly do not know why the industry uses tape. It's very, very expensive for how they use it, and these days you could store everything on digital media, which transfers more cleanly. Tape isn't very reliable, and I've seen some terrible problems come from it (e.g. garbled captioning, though that company used tape in an unapproved way).
I think the industry is just resistant to change.
got my bluetooth working.
Needed to install the firmware:
apt-get install firmware-atheros
and be running a recent 2.6.39 kernel that contains this patch: http://us.generation-nt.com/patch-add-atheros-bt-ar5bbu12-fw-supported-help-202165552.html
after that, the device shows up in lsusb:
Bus 004 Device 006: ID 0489:e02c Foxconn / Hon Hai
Bus 004 Device 003: ID 0cf3:3005 Atheros Communications, Inc.
it seems as if it sometimes doesn't work well during bootup; the RF-kill switch (Fn+F3) does the job of un-/re-plugging the device. Note that you need to press it several times to toggle wlan+bluetooth on/off. Keep a tail on syslog/messages open to see what's going on.
so now there are just two things about this acer:
- won't wake up after suspend
- won't switch to console / back / restart X correctly...
and with the 4 GB of RAM it's pretty usable. it was a little fiddly to get the keyboard out and open the "door" on the underside... but now it's working smoothly.
now this is damn cool:
(next to 'too cool to be true')
That's impressive. Even with all the limitations, and the lack of a network stack (I could see where that would be difficult), that's an awesome page there.
I could imagine using it to help teach people how to write C code, honestly.
But he already had experience in the field, so it's not like one of us starting off on it.
I wanted to run a hypervisor inside it so I could boot up another copy of Linux, start up a browser in it, and then run his demo emulator inside that.
Ah, now *that* would be pretty nifty. But he needs a network stack that can get to the internet, which he currently lacks.
I just did something that sort of underscored the amazing qualities of an optimizing compiler, and the power of inlining code.
I wanted a base64 encoder/decoder. But, you know, I'm picky. I want it to have this certain sort of interface, with such-and-so features, etc... and I figured I probably wouldn't actually find anything out there that does this for free.
But, I did find something that someone released into the public domain. He wrote it primarily in C, and gave it a C++ wrapper. The thing wasn't quite as flexible as I wanted, and it was slightly dangerous to use (potential for buffer overruns, and permitted using functions that should have been made private), so I rolled up my sleeves and did some hacking.
First, I wanted to only include one header file... no object files, no compiling of .c files, none of that... just include one header and have the whole thing right there for you. I also wanted to be able to choose the alphabet for the encoder, while making commonly-used alphabets easy to use. I wanted it to be fast, and I wanted to be able to either use std::i/ostreams or just plain old std::string objects. And, of course, I wanted it to be safe. Note that I didn't care about maintaining a C interface... just pure C++.
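Just to make the shape of the thing concrete, here's a minimal sketch of that kind of interface: header-only, string in/string out, with a selectable alphabet. All the names here are hypothetical illustrations, not the actual code:

```cpp
#include <cstdint>
#include <string>

// Hypothetical header-only base64 encoder with a selectable alphabet.
// Names and structure are illustrative only.
namespace b64 {

inline const std::string standard_alphabet =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

inline std::string encode(const std::string& in,
                          const std::string& alphabet = standard_alphabet) {
    std::string out;
    out.reserve(((in.size() + 2) / 3) * 4);
    std::size_t i = 0;
    // Full 3-byte groups become 4 output characters.
    for (; i + 3 <= in.size(); i += 3) {
        std::uint32_t n = (static_cast<unsigned char>(in[i]) << 16)
                        | (static_cast<unsigned char>(in[i + 1]) << 8)
                        |  static_cast<unsigned char>(in[i + 2]);
        out += alphabet[(n >> 18) & 63];
        out += alphabet[(n >> 12) & 63];
        out += alphabet[(n >> 6) & 63];
        out += alphabet[n & 63];
    }
    // 1- or 2-byte tail gets '=' padding.
    if (std::size_t rem = in.size() - i) {
        std::uint32_t n = static_cast<unsigned char>(in[i]) << 16;
        if (rem == 2) n |= static_cast<unsigned char>(in[i + 1]) << 8;
        out += alphabet[(n >> 18) & 63];
        out += alphabet[(n >> 12) & 63];
        out += (rem == 2) ? alphabet[(n >> 6) & 63] : '=';
        out += '=';
    }
    return out;
}

} // namespace b64
```

Since everything lives in the header as `inline` functions, dropping in a custom alphabet is just a second argument at the call site.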
After making all of this happen and getting it to work correctly, I thought I'd check on the code's performance. I had to resort to a very high resolution timer, since I wasn't working with a huge amount of data. I was pleased with the results... it seemed very fast indeed. But then I wanted to compare it to the original code.
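For what it's worth, the measurement itself is the easy part. I used a Windows-specific high-resolution counter, but a portable sketch of the same idea (using std::chrono, which may postdate the original code) looks like this:

```cpp
#include <chrono>

// Time a callable and return elapsed nanoseconds.
// Portable sketch of the measurement approach; the original
// used a Windows-specific high-resolution counter instead.
template <typename Fn>
long long time_ns(Fn&& fn) {
    auto start = std::chrono::steady_clock::now();
    fn();
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::nanoseconds>(
        stop - start).count();
}
```

steady_clock is the right choice here since it can't jump backwards mid-measurement the way a wall clock can.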
I was stunned. My code wasn't just faster... it blew the original code out of the water. My first version, using just strings and no std::i/ostream objects, was so much faster that I implemented the std::i/ostream support just so I could compare apples to apples (since I figure the stream objects require more processing power). There was a negligible difference... it was still blisteringly fast. The original code encoded my sample data in 283162/2604228ths of a second; my equivalent code took only 2866/2604228ths of a second (sorry for the weird numbers, but that was the most accurate clock I could find on this Windows OS). Decoding had similar gains... theirs: 152928/2604228ths of a second, mine: 13331/2604228ths of a second.
The thing is, I'm using the same algorithm he is, and even the same technique (a quirk of the language lets you leave blocks unclosed in the normal way within a switch statement, which makes it kind of nice for working with simple state machines). I didn't improve on the algorithm in any way whatsoever. Really, the only significant differences I can find are that I'm not compiling the encoder into its own object file, and that my code is C++ rather than C (which is kind of a semantic difference, really, but maybe the compiler handles it differently, I dunno). Everything else is just syntactic sugar to make my objects easier and more flexible to work with than his. In fact, theoretically, I traded performance for flexibility in the decoder, because I use a hash table to look up values instead of a straight map (I needed a way to specify my alphabet, which makes the decoder take a performance hit since I can't just use a static array).
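That quirk deserves a tiny illustration. Case labels are allowed inside nested blocks of a switch, so control can jump back into the middle of a loop, which gives you a resumable state machine almost for free. This is a hypothetical toy, not his encoder's code:

```cpp
// A resumable counter via the switch quirk: the "case 1" label sits
// inside the loop body, so each call jumps back into the loop right
// after the point where the previous call returned.
// Hypothetical illustration only (works the same in C).
static int next_count() {
    static int state = 0;
    static int i;
    switch (state) {
    case 0:
        for (i = 0; ; ++i) {
            state = 1;
            return i;        // yield i, remember we were mid-loop
    case 1:;                 // re-entry point; the loop continues
        }
    }
    return -1;               // unreachable
}
```

Each call yields the next integer: 0, 1, 2, ... The same shape is what makes byte-at-a-time base64 state machines pleasant to write.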
So, either having it compiled inline makes it that much faster, or maybe the optimizer does a nicer job when you work with C++ than C, despite similar code.
Here's a document on that language quirk (which works equally well in C) if you're interested:
I should figure out how to package this so you guys can look at it yourself if you want.
Oh, that URL doesn't just mention a language quirk... it also points out something kind of cool from Knuth's Art of Computer Programming. If your life seems to revolve around pushing computers around, you might want to look at it. It's kind of interesting.