In hindsight, no one ever claimed that a Raspberry Pi of any model was a powerhouse machine. However, I started thinking about how to benchmark computing power after leaving Dreamhost and noticing that even tar-and-gzipping my WordPress installs performed like it was on an ancient machine, despite the advertised processor specs. That got me thinking about a raw benchmarking metric that could be deployed anywhere. I was also learning Go and trying to understand Go concurrency. I ended up with a "concurrency" of 2 because the lower-powered systems didn't get anything more out of an additional goroutine.
The program performs 13,135,013 SHA-3 hashes, converting each digest to base64, until it finds a string that starts with "TEST".
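Here's a minimal sketch of the idea, not the original program. The counter-as-input scheme is my assumption, using Go's golang.org/x/crypto/sha3 package:

```go
// Two goroutines (the "concurrency of 2") each walk a disjoint slice of a
// counter space, hashing with SHA3-256 and base64-encoding the digest until
// one of them produces a string that starts with "TEST".
package main

import (
	"encoding/base64"
	"fmt"
	"strconv"
	"strings"
	"time"

	"golang.org/x/crypto/sha3"
)

const workers = 2

func main() {
	start := time.Now()
	found := make(chan int, workers)
	for w := 0; w < workers; w++ {
		go func(offset int) {
			// Worker 0 checks 0, 2, 4, ...; worker 1 checks 1, 3, 5, ...
			for i := offset; ; i += workers {
				sum := sha3.Sum256([]byte(strconv.Itoa(i)))
				if strings.HasPrefix(base64.StdEncoding.EncodeToString(sum[:]), "TEST") {
					found <- i
					return
				}
			}
		}(w)
	}
	fmt.Printf("hit at counter %d after %v\n", <-found, time.Since(start))
}
```

Setting workers to 1 gives the single-thread numbers below.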
My work workstation's output:
[Specs: Late 2013 Retina MacBook Pro 15", 2.3 GHz Core i7 (i7-4850HQ)]
A ThinkPad 11e with an AMD A4-6210 APU (1.8 GHz, Windows 10 Home):
Single thread: 135 seconds
Concurrency of two: 65 seconds
For comparison, its PassMark single-thread score is 697.
Raspberry Pi 3B:
Single thread: 1265 seconds
Concurrency of two: heat warnings!
For the purposes of this "performance" benchmark, there is definitely a non-linear relationship between a canned benchmark score and the time needed for a brute-force operation with a heavy calculation component. I'm also isolating only one aspect of system performance: the newest machine has a SATA SSD, the 2011 and 2009 machines have SATA HDDs, the Raspberry Pi runs from an SD card, and the RAM differs in every machine as well. Still, it was a fun experiment that motivated me to spend a little extra time in Go and learning Go concurrency.
I love my Seiki 4K display for writing code on my Mac. The one downside is that it times out and goes to sleep after 2 or 4 hours without activity from the remote (it doesn't care about activity on the screen). I decided to fix this problem in a somewhat complicated way: with an Arduino Uno that I already had and an IR control kit.
I used the IR receiver to capture the Seiki "Volume Up" and "Volume Down" codes, which appear to be structured like NEC codes. Those are hardcoded because I haven't really bothered to refine the setup or the code any further. The IRBlaster library uses digital pin 3 for IR sends, and I used digital pins 8, 9, and 10 for red, yellow, and green indicators that show where things are in the refresh cycle. The code as-is sends the volume-up and volume-down signals every hour or so; the indicators start with a slow blink on red, move to a slightly faster blink on yellow, and finish with a fast blink on green just before the send.
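Here's a rough reconstruction of that behavior, not the original sketch. I'm illustrating with the classic API of the common IRremote library (which also transmits on digital pin 3 on an Uno) rather than IRBlaster, and the NEC codes below are placeholders standing in for the captured Seiki codes:

```cpp
// Hypothetical reconstruction: blink through red -> yellow -> green, then
// send volume-up followed by volume-down so the display sees remote
// activity but the volume ends up unchanged.
#include <IRremote.h>

const unsigned long SEIKI_VOL_UP   = 0x00000000; // placeholder: captured code goes here
const unsigned long SEIKI_VOL_DOWN = 0x00000000; // placeholder: captured code goes here

const int RED_PIN    = 8;  // slow blink: early in the cycle
const int YELLOW_PIN = 9;  // faster blink: middle of the cycle
const int GREEN_PIN  = 10; // fast blink: send is imminent

IRsend irsend; // classic IRremote sends on digital pin 3 on an Uno

// Blink a pin for the given number of cycles at the given period.
void blinkPhase(int pin, int cycles, unsigned long periodMs) {
  for (int i = 0; i < cycles; i++) {
    digitalWrite(pin, HIGH);
    delay(periodMs / 2);
    digitalWrite(pin, LOW);
    delay(periodMs / 2);
  }
}

void setup() {
  pinMode(RED_PIN, OUTPUT);
  pinMode(YELLOW_PIN, OUTPUT);
  pinMode(GREEN_PIN, OUTPUT);
}

void loop() {
  blinkPhase(RED_PIN, 900, 2000);     // ~30 minutes of slow red blinks
  blinkPhase(YELLOW_PIN, 1800, 1000); // ~30 minutes of faster yellow blinks
  blinkPhase(GREEN_PIN, 50, 200);     // ~10 seconds of rapid green blinks
  irsend.sendNEC(SEIKI_VOL_UP, 32);   // nudge the display awake
  delay(500);
  irsend.sendNEC(SEIKI_VOL_DOWN, 32); // undo the volume change
}
```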
Here's a nice shot of the nVidia driver context-menu wackiness in Windows XP. The longer a window is open, the more of these entries appear. (Considering the window in question is Safari, that time span is probably 2-3 hours.)
Meanwhile, I spent 10-15 minutes trying to get the laptop to recognize that my external monitor was plugged in, and then trying to close windows for a graceful shutdown.
Faster boots: read-speed bias and a slimmed-down OS have helped exaggerate these results.
Battery life: the other components still consume the same amount of power, and their share of total power usage is not insignificant. My counterpoints:
Maximum-battery-life power management settings would actually make for a usable computer, without needing to spin down the hard drive as the primary power saver.
Optical drives are not even an option on many notebooks with SSDs (the MacBook Air, sub-notebooks), so the example of a DVD movie draining the battery would not be possible on those machines.
I see some misses in this article:
What about heat generation and dissipation? Would SSDs contribute less to system heating, allowing for less fan usage or no fan at all (thus saving power)?
What about mean time between failures (MTBF)? A commenter on the article remarked on this. The experimental MTBF figures indicate significantly longer lifetimes, but they may have come from a read-heavy environment. Write endurance may indeed be a concern, but at 100k-300k write cycles per cell, that would still mean 20+ years before failure (see the rough math after this list). Of course, then there was this SSD failure debacle. I've had a CompactFlash card in a camera fail on me within a month, with no chance of recovering anything before the failure.
Shock resistance: 1500 G / 0.5 ms for SSDs vs. 300 G / 2.0 ms and 160 G / 1.0 ms for mechanical drives.
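As a back-of-the-envelope check on that 20+ year figure (the drive capacity and daily write volume here are my own assumptions, not numbers from the article):

```go
// Rough write-endurance math, assuming ideal wear leveling: the total
// writable volume is per-cell cycles times drive capacity, so lifetime
// depends on how much you write per day.
package main

import "fmt"

func main() {
	const (
		cyclesPerCell  = 100_000.0 // conservative end of the 100k-300k range
		capacityGB     = 64.0      // hypothetical drive size
		writesPerDayGB = 10.0      // assumed daily write volume
	)
	totalWritesGB := cyclesPerCell * capacityGB
	years := totalWritesGB / writesPerDayGB / 365
	fmt.Printf("~%.0f years before wear-out\n", years) // ~1753 years
}
```

Even granting an order of magnitude for imperfect wear leveling and heavier write loads, that leaves plenty of margin above the 20-year floor.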
I'm looking for input on what your home and/or small business backup strategy is.
What? You don't have one? Well, mine has been spotty at best.
I originally bought my Dell XPS desktop as a "Scratch & Dent" machine. It has a RAID controller but came with only a 250 GB hard drive, and the RAID software reported the RAID volume as degraded (because the second drive of the volume was missing). I eventually replaced that drive with two 500 GB hard drives and opted not to put them in RAID 0 or RAID 1. Unfortunately, that means I didn't get mirroring; fortunately, it also means I didn't get striping, or a full 1 TB of data would be lost now.
My boot drive (which, of course, houses all of the pictures, iTunes "imported" music, etc. in the My Documents folder) died this morning. This was not the classic slow death I'm used to from older drives with slower spindle speeds. This drive now sounds like a chainsaw.
I originally created the boot drive by using Symantec Ghost to resize the original 250 GB drive onto a 500 GB drive, so I still have that old backup available. I had also just refreshed the backup of my pictures to the second drive; otherwise, they would be about a year out of date.
My prior backup experiences:
Backup pictures to DVD+/-RW using Nero, copying directory structures directly so that the DVDs would be usable as standalone discs.
Backup pictures to DVD+/-RW using Nero or other backup software, using disc spanning and proprietary backup formats.
Backup pictures to USB 2.5" HD
Backup music and pictures to the shared drive of a wirelessly networked PC that is used for little else.
Symantec Ghost backups for dying drive recovery and backup.
I now own a 1 TB USB drive for backup purposes, but I'm torn between Ghost-managed backups, Ghost images, and some non-proprietary format for my backup solution.
Imagine the applications for carrying 500 GB-2 TB of data in a form smaller than your little finger's fingernail. Until now, desktop-scale SSD storage has meant something on the order of the 2.5" hard drive form factor.
Imagine the implications to:
The netbook/sub-notebook evolution - the MacBook Air suddenly looks like a luggable?
The evolution of smartphones - Savvy users may be able to have them as desktop replacements.
Optical media - Why would you ever wait 30 minutes to burn a DVD-DL again?
The entertainment industry
Exact copies of 200-1000 DVDs could be stored on a single device. Good luck detecting a microSD card at Customs.
High-quality HD video cameras could be the size of a small point-and-shoot. YouTube becomes small potatoes as amateur filmmakers' hobbyist movies start competing with professionally produced films.
The software industry
How much damage could a virtually undetectable 500GB drive connected to the network do?
How much damage could a misplaced 500GB drive the size of a fingernail do?
The power consumption of such small drives could make current SSDs look like power hogs.
On a personal level, I'm just as intrigued by the possibility of exFAT/FAT64 being introduced as the file system for removable storage. FAT32 does not make for good removable storage once you get past the 2-4 GB range: big volumes force big cluster sizes, so a batch of small files quickly eats up disk space despite not filling even a quarter of the space allocated to it. I'm also hopeful that exFAT will see better support outside of Windows than NTFS has.
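A quick sketch of the cluster waste I mean (the 32 KB cluster is typical of large FAT32 volumes; the file count and size are made-up examples):

```go
// Allocation waste on FAT32: every file occupies at least one whole cluster,
// so small files on a large-cluster volume waste most of their allocation.
package main

import "fmt"

func main() {
	const (
		files       = 10_000
		fileSize    = 2 * 1024  // 2 KB of actual data per file
		clusterSize = 32 * 1024 // typical cluster on a large FAT32 volume
	)
	data := files * fileSize
	allocated := files * clusterSize
	fmt.Printf("data: %d MB, allocated on disk: %d MB\n", data>>20, allocated>>20)
	// Output: data: 19 MB, allocated on disk: 312 MB
}
```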