I was reading about Jason Kincaid’s issues with Evernote, a piece of software that I am also dependent on (but luckily haven’t had any issues with yet). It reminded me of other software I depend on that has failed or is currently failing me: iTunes and 1Password.
The software I use every day today is different from the software I used every day a few years ago. The software I use in a few years will be different from the software I use today. Through decades of computer use, I’ve realized that I can’t depend on my software, and that relying on it to exist and to keep working is folly. As we move toward subscription models for software, this will be ever more the case.
Let me talk a bit about how I work with my software now, in the form of a list of suggestions. Some of my friends in the industry think it is surprisingly luddite, but it has significantly reduced my pain over the years. It means less work in some areas (like fixing busted software updates) and more work in others (like maintaining plain-text backups), but as I’ve moved to having everything digital, it at least lets me feel like my files are somewhat future-proof.
- When you buy digital media, buy only DRM-free. I learned this lesson many years ago. In the early days of digital music there were many competing protected file formats. All of them died. If you bought Liquid Audio files, or one of the variants of Microsoft’s protected audio files, you were screwed. I buy DRM-free eBooks whenever I can (I love that O’Reilly always sells their books DRM-free). When I buy music, I only buy MP3 or Ogg files (these are no longer hard to find). TV and movies are more problematic. I basically only buy those when I absolutely can’t get them any other way, and I assume that at some point in the future I will either have a way to remove the DRM or simply lose them. If I can’t find a reasonably priced DRM-free alternative, I will sometimes buy the physical copy and rip or scan it instead of paying for a DRM-hobbled version.
- Only use software that lets you export to archival file formats. Evernote lets you export all your notes as HTML or XML; I back up my Evernote data to these files on a monthly basis. I back up all my Outlook e-mail as mbox (plain-text) files (you can do this by dragging individual folders to disk). 1Password also lets you export your data as an XML file (although you will want to encrypt it somehow in your backups). For the web services I use, I use IFTTT to archive them to Evernote, Dropbox, or both.
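A monthly export is only useful if each snapshot survives. Here is a minimal sketch of how I think about keeping dated copies of an export folder; the function name and the date-stamped layout are my own invention, not part of any vendor’s API:

```python
import shutil
from datetime import date
from pathlib import Path

def snapshot(export_dir: str, archive_root: str) -> Path:
    """Copy an exported archive (e.g. an Evernote HTML/XML export folder)
    into a dated directory, so each monthly backup is kept separately
    instead of overwriting the last one."""
    dest = Path(archive_root) / date.today().isoformat()
    shutil.copytree(export_dir, dest, dirs_exist_ok=True)
    return dest
```

Run monthly (by hand or from a scheduler), this leaves a plain folder of plain files per month: nothing proprietary between you and your notes.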
Creative application files are difficult, but I have learned this the hard way too. I recently had to track down a copy of Adobe Illustrator CS5 because it was the last version of the software that worked with Freehand files. I was a big Freehand user, and I never got around to converting my Freehand files into some application-neutral format. Now I make sure that I save a copy of everything in an uncompressed, full-resolution, non-proprietary format so that I can get to it again if I need it. This takes up a lot of space, but space keeps getting cheaper. Avoiding the loss of something you spent hours, days, or weeks on is worth the cost of a few extra GB.
I learned from my move from iPhoto to Lightroom how painful it is to use an application with virtual edits. Periodically, I output the edited versions of my images from LR at full resolution so I won’t lose them if I can’t use LR anymore. The metadata is another problem, but LR at least saves it in the sidecar or image files, so I can reconstruct it if I have to.
For audio apps, I output the dry and wet stems from each channel when I finish a track so that I can remix later and maybe use them as a guide if I want to try to reconstruct a track from the original files.
- Back up everything, multiple ways. What is the point of making your digital life future-proof if you can still lose it to a hard disk failure? In my case, I use CrashPlan for cloud backup. I also have a set of drives at work that are backups of my home drives. I bring each one home for one night just to back up its pair; the rest of the time it stays at work. This way, I have three copies of every file. I have also contemplated another NAS backup within my house, but with over 6TB of data, that is a bit expensive for a fourth, redundant copy. I will probably do it anyway at some point.
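Three copies only help if they actually match. A sketch of the kind of spot check I have in mind, comparing two copies by content hash (my own helper names, not any backup tool’s API):

```python
import hashlib
from pathlib import Path

def checksums(root: str) -> dict:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    base = Path(root)
    return {
        str(p.relative_to(base)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(base.rglob("*")) if p.is_file()
    }

def compare(primary: str, backup: str) -> list:
    """Return the relative paths that are missing from the backup
    or whose contents differ from the primary copy."""
    a, b = checksums(primary), checksums(backup)
    return [path for path, digest in a.items() if b.get(path) != digest]
```

An empty result means the backup holds byte-identical copies of everything in the primary; anything listed is a file to investigate before you need it.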
- Save copies of the software that you buy/download. Unfortunately, almost everything these days is authorized over the internet. This means that if you need to install an old piece of software to read a file that is no longer readable any other way, you might not be able to get it fully enabled, but you still might be able to use it in trial mode for a short period, long enough to recover that old file. Having the DMG for CS5 saved my bacon.
- Only update when you need/have to. This is the most controversial thing, especially because I have always made my living by selling (or renting) software (and upgrades) to people. This has also gotten a lot harder since the rise of the desktop app store and subscription revenue model. For my personal machine that I rely on and have no help or support on, I am very, very careful about when (or if) I update critical software. Before I apply a minor OS update, I always check the support boards to see if there are any issues. I almost never apply a major OS update to my personal computer. I actually can’t think of the last time I did this. The same goes for my critical (as opposed to fun) software. If everything is working on my machine and I’m able to get everything done, I prefer to leave it in that state rather than messing with my machine, possibly screwing myself up. There are a few exceptions. I will always apply security updates, for example. This doesn’t mean that I am always several years behind on software though. I update my hardware on a pretty regular basis, and usually when I do, I update all the software that I am currently using as well. I will still keep my old hardware around for a while, in case I need an old application for something.
- Keep your files organized. Having everything means you need to be able to find anything. The good part of keeping things in standard file formats is that you can take advantage of your OS’s search capabilities, but you’ll still want a reasonable directory structure.
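What “reasonable” means will differ for everyone; a sketch of the kind of predictable year/topic layout I mean, with a glob-based fallback when OS search isn’t handy (the helper names and hierarchy are just my illustration):

```python
from pathlib import Path

def file_scan(archive: str, year: int, topic: str, name: str, data: bytes) -> Path:
    """Store a document under a predictable year/topic hierarchy,
    e.g. archive/2013/taxes/return.pdf."""
    dest = Path(archive) / str(year) / topic
    dest.mkdir(parents=True, exist_ok=True)
    path = dest / name
    path.write_bytes(data)
    return path

def find(archive: str, pattern: str) -> list:
    """Walk the whole archive for a filename pattern; a simple
    stand-in for Spotlight-style search on any platform."""
    return sorted(Path(archive).rglob(pattern))
```

The point is less the code than the discipline: if every scan lands in the same shape of tree, both you and your OS indexer can find it years later.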
For software developers
It’s easy to ignore old operating systems and backwards compatibility. You can look at your analytics and say “no one uses that feature anymore.” I’ve made that calculation myself many times as an engineering leader. Still, it is worth making sure that your users have an exit strategy, or even a staying-put strategy, especially if you are building a service or subscription instead of an application. I used to use Gowalla. I put a lot of data into that service. When it went out of business, they put up a page promising a tool to download your data. I thought that was a classy way to go. That tool never appeared, and all that data was lost.
If you want to treat your users right, make them never regret using your software. If you are lucky enough to have your software last for a while, remember all the people who paid you along the way. Treat them with respect, and they will keep paying you into the future.
I just saw an MS Surface commercial where someone used it comfortably on an airplane tray table. They must have been in mega first class, because I’ve seen people try to use them on “real” tray tables. It’s hilarious. The keyboard sticks out over the too-small space between your body and the tray table, and the back end comically and continually falls off the other edge.
The kickstand was the stupidest thing about the first version of that product. It was fine if you wanted to watch a movie, but most of the time it wasn’t even at a good angle for that. With the Surface vertical you can’t type on it, although with its weird aspect ratio you can’t comfortably type on it anyway. Since the device wasn’t really useful without the keyboard, you essentially ended up with a laptop without a hinge. The laptop hinge has survived for decades for a reason: it works, and it works well.
Try and use a Surface on your lap. You can’t type on the screen, and you need to be nearly horizontal (or amazingly long limbed) to even fit it with the keyboard on your lap. Did Microsoft only test this on tables? Just bad, bad, bad design.
With the second version of the Surface, they kept the kickstand, but they are now marketing it as a device for doing work instead of entertainment. Now the design is even stupider. The kickstand on the Surface 2 has two positions, which is a slight improvement, but the device is still worthless without a keyboard, and it still won’t fit on a tray table or your lap.
I can’t believe they are doubling-down on this.
For the record, at one time I had TWO Surface RTs. I had my company purchase one for me when they first launched. I seriously tried to use it and gave up after a couple of weeks of frustration. The second was given to me by Microsoft when I attended the Microsoft MIX conference. I never took that one out of the box and eventually gave it away, since I knew I would never use it.
Someone is spoofing my Skype number for robocalls, so I get a dozen people *69ing me every hour. Some leave angry voice mails.
I never used it anyway, so I cancelled the Skype number subscription, thinking that it would actually CANCEL MY SUBSCRIPTION. Except Microsoft won’t cancel it until the subscription runs out. IN NOVEMBER. MS customer support never replied to my messages.
Will probably need to create a new Skype account, which is lame.
Running a phone service is hard; running an IP telephony service is harder. I expect the same level of support I would get from a telephone service provider, but I also expect complete control and access, just like with any web service. Unfortunately, Skype is delivering neither in this case.
[Update March 3rd, 2014]
I was finally forced, for work reasons, to upgrade to 11.1.4. I found a suggestion on the Apple forums and decided to try it.
These were my steps:
- First I backed EVERYTHING up (my media drive and my iTunes library) to a separate disk.
- Then I quit iTunes and moved my iTunes folder out of my user directory so that it wouldn’t get picked up when I relaunched.
- I updated iTunes to 11.1.4.
- Then I launched the app and let it create a new iTunes folder and library.
- I made sure that all the sync settings were off, so that no apps or podcasts would be synced over iCloud.
- I quit iTunes.
- I moved my iTunes folder back into my user directory.
- I relaunched iTunes and let it update.
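The folder shuffle in those steps can be sketched as two small helpers; this is only an illustration of the move-aside/move-back dance (the function names and the parked-folder name are mine, and the launching, updating, and sync-setting steps still have to happen by hand in iTunes between the two calls):

```python
import shutil
from pathlib import Path

def park_library(music_dir: str):
    """Move the iTunes folder aside so the next (freshly updated)
    iTunes launch builds a brand-new, empty library.
    Quit iTunes before calling this; afterwards, update iTunes,
    launch it once, and turn every sync setting off."""
    music = Path(music_dir)
    shutil.move(str(music / "iTunes"), str(music / "iTunes-parked"))

def restore_library(music_dir: str):
    """Discard the throwaway library iTunes just created and put the
    real one back, then relaunch iTunes and let it update the library."""
    music = Path(music_dir)
    shutil.rmtree(music / "iTunes")
    shutil.move(str(music / "iTunes-parked"), str(music / "iTunes"))
```

On a stock macOS setup the music folder would be `~/Music`, but back everything up first regardless; the whole point of the exercise is not trusting the updater.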
So far, this has worked OK (for about four weeks now). I periodically check my library to make sure that no files have been lost, and it looks fine so far. I have seen posts on the Apple forums that point to people still having podcasts deleted days after upgrading, so I’m going to continue to check often.
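That periodic check is easy to automate: snapshot the set of files in the library after a known-good state, then diff against it later. A sketch, with invented helper names (this watches for silent deletions; it says nothing about corrupted files):

```python
import json
from pathlib import Path

def manifest(library: str) -> set:
    """The set of files currently in the library, as relative paths."""
    root = Path(library)
    return {str(p.relative_to(root)) for p in root.rglob("*") if p.is_file()}

def save_manifest(library: str, out: str):
    """Record a known-good snapshot of the library to a JSON file."""
    Path(out).write_text(json.dumps(sorted(manifest(library))))

def missing_since(library: str, saved: str) -> list:
    """Files present in the saved snapshot but gone now --
    exactly the silent deletions worth catching early."""
    before = set(json.loads(Path(saved).read_text()))
    return sorted(before - manifest(library))
```

Save a manifest right after a verified-good launch, and a quick `missing_since` run tells you within minutes, not weeks, whether the app has eaten anything.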
I will likely do a similar process every time I update iTunes from now on. I will probably also avoid updating to any new version for as long as I can. Unfortunately, I’ve lost all trust in an application that I have depended on for years.
I also want to mention that a friend with contacts on the iTunes team forwarded a link to this post, and to the forum thread, to some folks on the team. The response (not official, just person-to-person, second hand) was that this wasn’t an issue they thought was affecting many users, and therefore it wasn’t a major priority for the team. That may be true (as an engineering leader, I’ve made that call myself a few times), but to a user for whom it is creating massive problems, it is little comfort. The issue may have been fixed by the team since then, but the recent comment from Ed seems to point otherwise.
[Update October 5th – There is a new version of iTunes, 11.1.1, whose release notes claim a fix for an issue with deleted podcasts. I installed it. It ran fine for a while (it didn’t fix the podcasts it had already broken, but it didn’t screw any more up), and then it hung with a spinning beach ball. I had to Force Quit it after a few minutes. When I relaunched, it had COMPLETELY REMOVED MOST OF MY PODCAST SUBSCRIPTIONS AND UNSUBSCRIBED ME FROM THE ONES THAT WERE LEFT. Luckily, I had backed up before this happened, and I was able to copy over my iTunes folder and relaunch, which restored all my podcast subscriptions, until it beach-balled again AND REMOVED THEM AGAIN (I didn’t force quit this time). I then checked my file folders and of course it had DELETED MY FILES WITHOUT WARNING, AGAIN!!! DO NOT UPGRADE TO ITUNES 11.1 IF YOU SUBSCRIBE TO PODCASTS! At this point, I once again have to completely reconstruct my podcast library due to poor Apple engineering.]
[Update September 23rd – The situation is even worse than I thought. iTunes 11.1 is basically useless for podcasts now; see below]
I have been using iTunes since version 1 or 2. I’m not sure. A very long time (nearly a decade). When they added podcast support, I switched from the podcatcher I was using to iTunes and have been using it ever since to sync my podcasts.
While I don’t save every episode of every podcast I have ever subscribed to, I do save some of them, which means I have literally years of archived podcasts. Or rather, I should say that I HAD years of archived podcasts. When I upgraded to iTunes 11.1, what I didn’t notice was that Apple had somehow unsubscribed me from some of my podcasts, or gotten confused about my subscription state. Interestingly, it was the ones I actually listen to pretty regularly. When it did this, IT SILENTLY DELETED big chunks of the episodes that had been downloaded from those casts.
This is a data-loss bug, the absolute worst kind of bug imaginable: a stop-ship bug, a never-release-until-fixed issue. Unfortunately, Apple did release it. I didn’t notice that this had happened, but at some point I got a warning that I was running out of space on my system drive, so I emptied the trash. I noticed that there seemed to be a lot more files in it than I expected, but I didn’t think much about it (I generally leave files in the trash until I need the space). A day or so later, I noticed that iTunes didn’t think I was subscribed to a bunch of my podcasts, and that those podcasts were now missing dozens of archived episodes.
So now I will spend the next several days restoring from my on-line and off-site backups and slowly reconstructing my podcast library. Unfortunately, I now also need to worry about what other files may have been quietly cleaned up by iTunes: music, ebooks, movies? If there are more, I may never notice.
In the end, it means that a piece of software that I have used daily and depended on for years can no longer be trusted. The effect of this loss of trust cannot be overstated. It is the first step toward my looking for another solution, one that wouldn’t leave me locked into Apple’s platform. This is why this kind of bug is so critical to catch, and why missing it is not a small issue but a catastrophic one for an ISV or IHV.
If you are a long-time user of iTunes, beware 11.1, and everyone should have MULTIPLE backups of their files, for just this kind of event. I’m very glad that I have a complete backup of all my files on a hard drive that I can use to restore and an on-line additional backup in case that drive is busted.
[Update September 23rd]
After several hours of re-downloading episodes and restoring from backups, I relaunched iTunes only to find that it had deleted those episodes AGAIN. This means that this wasn’t an issue with upgrading the database, but rather a much more serious issue. This is beyond a critical issue for people who have large libraries of podcasts in iTunes. It seems that it doesn’t affect other parts of the library, but I’m not sure I can trust that for sure. This is a major issue since I have several iDevices and switching to another application is basically out of the question for the moment. I now have to work around this bug and hope that Apple will eventually fix it while being wary of the app deleting files every time it is launched. As a user, this sucks.
Here is the Apple Support forum thread:
Microsoft finally unveiled the new much-rumored organizational plan. Glad to see Microsoft moving audaciously. This is long overdue.
However, knowing that organization, I don’t know if there is much chance that it will be successful. The whole organization has been set up to compete with each other for decades. This kind of cultural change is probably beyond what is possible at this point. The battle lines are too well established, the rivalries too set in stone.
The culture of Microsoft has always been one of intense competition. Successful individuals and managers rise more on their ability to outshine their peers than to cooperate with them. A new high-level alignment or a single memo will not change that. If Microsoft really wants to be nimble and more collaborative, they need to clean house.
Furthermore, organizing engineering as massive silos parallel to the other massive silos representing other business functions is exactly the wrong way to do this. Every new effort will require coordination between massive groups with conflicting priorities, politics, and agendas. Everything will be harder. The company itself is so massive that having responsibility for success meet only at the tops of these tall functional mountains will not be sufficient to make these efforts work. The people with responsibility will be too far away from the details to be effective. Layers upon layers of management (each with their own goals, agendas, and success metrics) will need to be navigated to get any level of cooperation.
It’s going to be a tough few years for the employees at the company. For the front-line engineers, their day-to-day work will probably not change much, but at the higher levels, there is going to be tremendous pain as the new structure and corresponding power battles work themselves out. In the end, I expect very little will change on the inside, or the outside.
I’d be delighted to see Microsoft prove me wrong.
Amongst my many problems is the fact that I am a bit of a pack rat. Not bad enough to be on “Hoarders,” but bad enough that I have a hard time getting rid of stuff. My studio at home is cluttered with hundreds of books, CDs, DVDs, video tapes, papers, and other assorted items I’ve accumulated over my life. Books are the toughest for me to part with. I’m always picking them up faster than I can finish them, so the piles get larger and larger. Books are also the biggest shelf hogs of all the stuff I accumulate. Part of the problem is that even once I finish a book, I always assume that I’ll want it around to re-read or reference some day.
The answer is, of course, to stop buying new books until I make up some lost ground in my to-read pile, and to just get over my fetishizing of the books I’ve already read. As any pack rat will tell you, that is pretty tough to do.
A more modern answer is to switch to buying e-books. This won’t fix my deepening to-read pile (in fact it might make it worse, because I won’t be able to see a physical pile of books to read), but it would address the clutter.
I love the concept of e-books. There are a lot of books that I buy that I won’t buy as e-books, like art monographs, but mostly I read non-fiction. For the majority of the books I read, the physical object really isn’t doing anything special for conveying the ideas. Most of the stuff I read would come across just fine on an electronic reader. To this end, I did get one a couple years ago. However, when I started to look into buying e-books, I was pretty disappointed.
I have a rule about DRM: I won’t buy any digital item that has it. I’ve been burned several times over the years by vendors sunsetting their DRM schemes and leaving their customers with a lot of bits they paid for but can no longer access. DRM-free versions of e-books do exist, but at such a premium that they are often much more expensive than their physical counterparts. Even the DRM’d e-books are often as expensive as, or more expensive than, their physical versions, especially if they have been out for a few years. So, with the exception of a few O’Reilly titles, I basically haven’t purchased any e-books and have mostly used my e-reader to read academic papers and other PDFs.
Last year, I purchased a Fujitsu ScanSnap scanner to help me address the piles of paper cluttering my desk, file cabinet, and boxes in the garage. This was the answer to my pack-rat ways. It allows me to keep digital, searchable copies of every piece of paper I ever wanted without having to keep the physical paper. As I said, it also means that I have a searchable archive, thanks to OCR. I’ve slowly been working my way through all my clutter, one file folder and one box at a time, and it feels liberating. I’m finally clearing out magazines I’ve saved for 10 years to read one article, and ridiculous crap like that. My recycle bin is always full.
Today, I finished reading Daniel Pink’s Drive. I read most of it a while ago, but it sat on my nightstand for a year or so while I read other books, until I got around to finishing it. I won’t review it here, other than to say that it was a pretty good book, but if you watch this video and understand the concept, you really have no need to buy it. It was a book I thought was pretty good, but it didn’t say anything to me that I didn’t already know. What I should have immediately done was put it in a box to donate to a library, or given it to a friend, or a clueless boss, or something. Instead, I went to find a place for it on one of my overwhelmed shelves.
Then I spied my scanner.
I realized that this physical book didn’t have anything special about it. It came from a computer file, was printed on cheap paper and was actually the worst manifestation of the ideas from a standpoint of me being able to reference it again. If there was something I remembered from this book that I wanted to look up: I’d need to remember that it came from this book instead of from another one, then I’d need to remember where I put the book (home, work, a box in the garage), and then I’d need to actually find the section of the book that I was looking for. These days, I probably wouldn’t get past step one. I’d google for my answer and then never go to step two.
I decided to see how hard it would be to turn my physical book into an e-book for future reference. It was actually really easy. The whole process took less than twenty minutes.
First I got the tools…
I ended up not needing the smaller box cutter, the bigger one worked great.
I clamped the book to my desk. It is upside down because I’m right handed and I didn’t want to slice my fingers off. The ruler was only necessary for the first couple passes, but I kept using it as a finger guard. I put the ruler a bit in from the spine of the book and just got to work.
I figured that it was going to take a really long time to slice through a whole book with an admittedly dull box cutter, but actually it took nearly no time at all.
This was maybe 8 times through with the box cutter in a 260-some page book.
Before I did this, I figured it was going to take me forever. It probably took more time to get all the tools together than it did to finish slicing off the spine.
I just started feeding pages into the scanner. That went quick.
Man, I love the ScanSnap.
It felt a bit weird, throwing a book into the recycling bin. I had a bit of a hard time with that. Part of me was ready to find a jumbo binder clip so I could still keep the book. That is really how my mind works.
I used Acrobat Pro’s OCR engine on the PDF generated by the ScanSnap. The original PDF was 26MB. After OCR, it was less than 11MB and more legible. The OCR went pretty quick. I guess this is about the best possible case for an OCR engine, so that shouldn’t be too surprising.
And here is my new e-book on my virtual bookshelf.
And here it is in the iBooks reader app.
The nice thing is that I could also read it on pretty much any e-reader, computer, or mobile device with a screen. That is the genius of open standards and DRM-free files. Even if some day the PDF format dies, I know that I’ll be able to take my book to whatever the next format or reading device is. Just like a real book.
Cross-posted from my old Adobe blog
I’m privileged to once again be speaking at the SC conference. For those who don’t know it: “SC is the International Conference for High Performance Computing, Networking, Storage and Analysis.” If you are attending, I’ll be on a panel entitled Parallelism, the Cloud, and the Tools of the Future for the next generation of practitioners. I’ll be joining some of my compatriots in the Educational Alliance for a Parallel Future to once again discuss the skill sets that collegiate computer science programs should be (and mostly aren’t) imparting to their students in the area of parallel programming.
The abstract for the panel is as follows:
Industry, academia and research communities face increasing workforce preparedness challenges in parallel (and distributed) computing, due to the onslaught of multi-/many-core and cloud computing platforms. What initiatives have begun to address those challenges? What changes to hardware platforms, languages and tools will be necessary? How will we train the next generation of engineers for ubiquitous parallel and distributed computing? Following on from the successful model used at SC10, the session will be highly interactive, combining aspects of BOF, workshop, and Panel discussions. An initial panel will lay out some of the core issues in this topic with experts from multiple areas in education and industry. Following this will be moderated breakouts, much like collective mini-BOFS, for further discussion and to gather ideas from participants about industry and research needs.
If this sounds similar to the session from the Intel Developer Forum in September, there is good reason. It was the second most popular session of that conference. The IDF panel and breakout sessions covered some really interesting ground, and I really liked the format. I felt like the discussions I had with the people in my subgroup at IDF were deeper, more specific and more productive than a traditional panel format would have been.
While the speakers on this panel are different from those in September, I think we’ll still end up splitting along the axis of using abstractions to teach fundamentals vs. teaching from first principles up. Which camp you are in seems at least somewhat determined by whether you produce abstractions over the low-level elements as part of your work. I am very much in the fundamentals camp, as I think that understanding what the abstractions are built on is essential to choosing the right abstraction, much as artists tend to start with representative figure drawing. What will make an interesting difference from IDF is the number of audience members who come from outside computer science (HPC is used heavily by scientists for whom the computation is only a means to the end of solving a problem in a non-computational discipline). Those audience members are less likely to understand the fundamentals, or to care about them. For them, parallelism is just a tool to get their answer faster. This should make for a lively debate!
My statement for the panel is as follows (yes, I did crib the last paragraph from my earlier position):
The team I manage is building a single, modern, software product. A few years ago, that would have meant a desktop application written primarily in C++, most likely single-threaded. Today, it means software that runs on the desktop, but also on mobile devices and in the cloud. Working in my organization are developers who write shaders for the GPU, developers who write SSE (both x86 and ARM), developers using distributed computing techniques on EC2 and threads everywhere throughout the clients and server code. We write code in C, C++, ObjC, assembly, Lua, Java, C#, Perl, Python, Ruby and GLSL. We leverage Grand Central Dispatch, pThreads, TBB and boost threads. How many of the technologies that we use today in professional software development existed when we went to school? Nearly none. How many will still be used in a few years from now? Who knows. The reason we can continue to work in the field is that our education was grounded not just in programming techniques for the technology of the time, but also in computer architecture, operating systems, and programming languages (high level, low level and domain-specific).
Learning GPGPU was much easier for me because I could understand the architecture of graphics processors. I was able to understand Java’s garbage collection because I understood how memory management worked in C. I chose TBB over Grand Central Dispatch to solve a specific threading problem because I could evaluate both technologies against my experience.
We’re doing students a disservice if we teach them the concepts using high-level abstractions or only teach them a single programming language. Having an understanding of computer architecture is also critical to a computer science education.
These fundamentals of computer science do not necessarily need to be broken out into their own classes. They can and should be integrated throughout the curriculum. Threading should be part of every course. It is a critical part of modern software development. Different courses should use different programming languages to give students exposure to different programming models.
If I were a dean of computer science somewhere, I’d look at creating a curriculum where parallel programming using higher-level abstractions was part of the introductory courses, using something like C++11, OpenMP, or TBB. Mid-level requirements would include some computer architecture instruction, specifically how computer architecture maps to the software that runs on top of it. This might also include some lower-level instruction in things like pThreads, race conditions, lock-free programming, or even GPU and heterogeneous programming techniques using OpenCL. In later courses focused more on software engineering, specific areas like graphics, or larger projects, I’d encourage students to use whichever tools they found most appropriate to the task at hand. This might even include very high-level proprietary abstractions like DirectCompute or C++ AMP, as long as the students could make the tradeoffs intelligently because of their understanding of the area from previous courses.
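A tiny illustration of why the fundamentals matter when choosing an abstraction (my own example, not from the panel, and in Python rather than any of the panel’s languages): the two executors below share an identical interface, but only knowing an architectural fact, CPython’s global interpreter lock, tells you that one of them serializes CPU-bound work while the other actually uses multiple cores.

```python
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def count_primes(bound: int) -> int:
    # Deliberately CPU-bound work: count primes below `bound`
    # by naive trial division.
    return sum(
        1 for n in range(2, bound)
        if all(n % d for d in range(2, int(n ** 0.5) + 1))
    )

def total(executor_cls, bounds) -> int:
    # The same high-level code runs with either executor class,
    # but the behavior underneath differs: with CPython's GIL,
    # ThreadPoolExecutor serializes this CPU-bound work, while
    # ProcessPoolExecutor spreads it across cores.
    with executor_cls() as ex:
        return sum(ex.map(count_primes, bounds))
```

A student who only ever saw the executor interface would time `total(ThreadPoolExecutor, ...)`, see no speedup, and conclude parallelism doesn’t work; a student who learned what sits beneath the abstraction knows to reach for processes here and threads for I/O-bound work.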
You can read the position statements from the rest of the panel here.