Making a big change

[Image: a “Keep Calm and Revel On” poster. I didn’t make this poster, but I love it.]

Coming to Adobe was a dream come true for me. Someone first showed me Photoshop on a Mac SE after hours at the Center for Art and Technology at Carnegie Mellon back in 1989 or 1990. It was captivating to a computer science student with a deep interest in imaging and graphics. I knew that someday I would work there. It wasn’t a direct path, but I did get there eventually.

In my nine years at the company, I have been able to work on some intensely cool projects: Adobe Image Foundation, Pixel Bender, and Revel. Each has been technically challenging, and each has also had an impact for Adobe customers. Solving cool technical problems is fun, but doing it in a way that millions of users can benefit from is massively rewarding. I am grateful that being at Adobe has allowed me to work on such personally and professionally gratifying projects. I am also grateful that I have been able to work with some absolutely stellar teams.

Adobe is the best company that I have ever worked for, but it is time for me to make a change. This will be my last week there.

To all of Adobe’s customers: I hope that my work has helped make the tools you use a little bit better, faster, and more stable. It has been a joy to build stuff for you. Thank you.

Now, I’m looking forward to my next adventure. I will be joining Spotify in Stockholm as Director of Engineering in a few weeks.

This is my new dream, and I am incredibly excited about it. The people I have met at Spotify are intelligent, creative, and passionate. They are working to change the world: making all the music in the world available to anyone, while making sure that the people who create the music we love can do it as a profession. This was my mission multiple times in the past, in the days before I came to Adobe. I’m excited to pick up that banner once again and do my best to help it become a reality.

Things are gonna get interesting. Stay tuned.

Speaking this week at the SC11 Conference in Seattle

Cross-posted from my old Adobe blog

I’m privileged to once again be speaking at the SC conference. For those who don’t know it: “SC is the International Conference for High Performance Computing, Networking, Storage and Analysis.” If you are attending, I’ll be on a panel entitled Parallelism, the Cloud, and the Tools of the Future for the next generation of practitioners. I’ll be joining some of my compatriots in the Educational Alliance for a Parallel Future to once again discuss the skill sets that collegiate computer science programs should be (and mostly aren’t) imparting to their students in the areas of parallel programming.

The abstract for the panel is as follows:

Industry, academia and research communities face increasing workforce preparedness challenges in parallel (and distributed) computing, due to the onslaught of multi-/many-core and cloud computing platforms. What initiatives have begun to address those challenges? What changes to hardware platforms, languages and tools will be necessary? How will we train the next generation of engineers for ubiquitous parallel and distributed computing? Following on from the successful model used at SC10, the session will be highly interactive, combining aspects of BOF, workshop, and panel discussions. An initial panel will lay out some of the core issues in this topic with experts from multiple areas in education and industry. Following this will be moderated breakouts, much like collective mini-BOFs, for further discussion and to gather ideas from participants about industry and research needs.

If this sounds similar to the session from the Intel Developer Forum in September, there is good reason: that session was the second most popular of the entire conference. The IDF panel and breakout sessions covered some really interesting ground, and I really liked the format. I felt like the discussions I had with the people in my subgroup at IDF were deeper, more specific and more productive than they would have been in a traditional panel format.

While the speakers on this panel are different from those in September, I think we’ll still end up splitting along the axis of using abstractions to teach fundamentals vs. teaching from first principles up. Which camp you are in seems at least somewhat determined by whether you produce abstractions over the low-level elements as part of your own work. I am very much in the fundamentals camp, as I think that understanding what the abstractions are built on is essential to choosing the right abstraction, much as artists tend to start with representational figure drawing. What will make an interesting difference from IDF is the number of audience members who come from outside of computer science (HPC is used more by scientists for whom the computation is only a means to the end of solving a problem in a non-computational discipline). Those audience members are less likely to understand the fundamentals, or to care about them. For them, parallelism is just a tool to get their answer faster. This should really make for a lively debate!

My statement for the panel is as follows (yes, I did crib the last paragraph from my earlier position):
The team I manage is building a single, modern software product. A few years ago, that would have meant a desktop application written primarily in C++, most likely single-threaded. Today, it means software that runs on the desktop, but also on mobile devices and in the cloud. Working in my organization are developers who write shaders for the GPU, developers who write SSE (both x86 and ARM), and developers using distributed computing techniques on EC2, with threads everywhere throughout the client and server code. We write code in C, C++, ObjC, assembly, Lua, Java, C#, Perl, Python, Ruby and GLSL. We leverage Grand Central Dispatch, pThreads, TBB and Boost threads. How many of the technologies that we use today in professional software development existed when we went to school? Nearly none. How many will still be in use a few years from now? Who knows. The reason we can continue to work in the field is that our education was grounded not just in programming techniques for the technology of the time, but also in computer architecture, operating systems, and programming languages (high level, low level and domain-specific).

Learning GPGPU was much easier for me because I could understand the architecture of graphics processors. I was able to understand Java’s garbage collection because I understood how memory management worked in C. I chose TBB over Grand Central Dispatch to solve a specific threading problem because I could evaluate both technologies given my experience with pThreads.
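To make that concrete, here is the flavor of code where that kind of evaluation matters. This is a minimal sketch, not the actual problem from my team: the image-brightening task and names are hypothetical, and it assumes TBB is available (link with -ltbb).

// A minimal TBB sketch: brighten an image buffer in parallel.
// TBB decides how to split the range across worker threads;
// the lambda does the per-chunk work.
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <vector>
#include <cstdint>

void brighten(std::vector<uint8_t>& pixels, int amount) {
    tbb::parallel_for(
        tbb::blocked_range<size_t>(0, pixels.size()),
        [&](const tbb::blocked_range<size_t>& r) {
            for (size_t i = r.begin(); i != r.end(); ++i) {
                int v = pixels[i] + amount;
                pixels[i] = static_cast<uint8_t>(v > 255 ? 255 : v);
            }
        });
}

Coming from pThreads, you can see exactly what the abstraction is doing for you: thread creation, work division and joining all disappear, which is precisely the tradeoff you want to be able to evaluate.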

We’re doing students a disservice if we teach them the concepts using only high-level abstractions, or if we teach them only a single programming language. An understanding of computer architecture is also critical to a computer science education.

These fundamentals of computer science do not necessarily need to be broken out into their own classes. They can and should be integrated throughout the curriculum. Threading should be part of every course. It is a critical part of modern software development. Different courses should use different programming languages to give students exposure to different programming models.

If I were a Dean of Computer Science somewhere, I’d look to creating a curriculum where parallel programming using higher-level abstractions was part of the introductory courses, using something like C++11, OpenMP or TBB. Mid-level requirements would include some computer architecture instruction: specifically, how computer architecture maps to the software that runs on top of it. This may also include some lower-level instruction in things like pThreads, race conditions, lock-free programming, or even GPU or heterogeneous programming techniques using OpenCL. In later courses focused more on software engineering, specific areas like graphics, or larger projects, I’d encourage the students to use whichever tools they found most appropriate to the tasks at hand. This might even include very high-level proprietary abstractions like DirectCompute or C++ AMP, as long as the students could make the tradeoffs intelligently because of their understanding of the area from previous courses.
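To give a sense of what that introductory material could look like, here is a minimal OpenMP sketch (my own illustration, not from any actual syllabus): a parallel sum that gives students a speedup before they have ever touched a thread API.

// Sum an array in parallel with OpenMP. Compile with -fopenmp (gcc/clang).
// The reduction clause gives each thread a private accumulator and
// combines them at the end, avoiding a data race on sum.
#include <cstdio>
#include <vector>

int main() {
    std::vector<double> data(1000000, 0.5);
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < (int)data.size(); ++i)
        sum += data[i];
    printf("sum = %f\n", sum);
    return 0;
}

A later architecture course can then peel the pragma back: which threads were created, how the loop was divided among them, and why the private accumulators matter.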

You can read the position statements from the rest of the panel here.

Adobe Carousel is now available!

Cross-posted from my old Adobe Blog

I’ve been waiting a very long time to finally post this. Adobe Carousel is now available in the Mac App Store and the iTunes App Store!

Getting this into your hands required a tremendous effort from a great team, and there is a lot more to come; more than just support for additional platforms like Android and Windows. This first version is just the tip of the iceberg. We wanted to put it into your hands, but it isn’t done. There is a lot more we want to do, but we want to hear from you. What do you need to make Adobe Carousel work even better for you? We want to know. We’re already hard at work on the next release, and hope to put it into your hands soon. Until then, download the client from the Mac App Store (requires Lion) or the iTunes App Store. We have a 30-day free trial. Upload some photos and create a new Carousel to share with family or friends. Edit your photos and see how much Adobe imaging power we’ve been able to fit in your hands. The subscription pays for UNLIMITED storage for both you AND THE PEOPLE YOU SHARE WITH. That is a pretty serious deal.

Want to know more? This post on the Photoshop blog provides a lot of the official details and links.

This video summarizes what we are trying to create:

and this video tells you more about the team that I am so proud to be a part of (no actors, just really us):

Wondering why I sound so tired in that video? We shot it only days before we finished the product and we’d all been putting in long hours for weeks!

Having problems with the Adobe Connect add-in on OS X? Here is how to uninstall it.

I’m posting this here because it took me more than 20 minutes of googling to find the answer (and I’m an Adobe employee).

The Adobe Connect add-in uses Flash, and sometimes an update to Flash on your system leaves Connect in a bad state. The way you’ll see this is that when the add-in tries to launch, it gets stuck on a small window that says “Loading Adobe Connect…” and never finishes.

The way to fix this problem is to uninstall the Adobe Connect add-in. Unfortunately, Adobe doesn’t make it easy for you to do that: there is no uninstaller and no information on the Adobe web site. Here is where the add-in is installed:

~/Library/Preferences/Macromedia/Flash Player/www.macromedia.com/bin/connectaddin

Delete that directory and you have now uninstalled the add-in. Your Connect sessions will now be hosted in your web browser until the next time you need add-in functionality, at which time you’ll be prompted to re-install it.

Hopefully this solves your problem and you found it faster than I did.

(tip of the hat to Aral Balkan, who had to do this a few years ago too)

Speaking once again on Parallelism and Computer Science Education at the Intel Developer Forum

Cross-posted from my old Adobe Blog

As a hiring manager building teams working on modern computer software, I’ve often been disappointed by the lack of a proper foundation in parallel algorithms and architectures in current Computer Science curricula. To that end, I’ve been working with a group called the Educational Alliance for a Parallel Future that aims to improve Computer Science curricula in this critical area. The EAPF is once again convening a panel of educators and industry representatives to talk about this important issue, and once again I am delighted to participate.

The panel is entitled: Parallel Education Status Check – Which Programming Approaches Make the Cut for Parallelism in Undergraduate Education? Unlike previous iterations of this panel where we spoke in generalities, this time we’ll be diving a bit deeper into specific technologies that we think are good starting places for educators to introduce to their students.

Here is an excerpt of the abstract:
The industry and research communities face increasing workforce preparedness challenges in parallel (and distributed) computing, due to today’s ubiquitous multi-/many-core and cloud computing. Underlying the excitement over technical details of the newest platforms is one of the thorniest questions facing educators and practitioners — What languages, libraries, or programming models are best suited to make use of current and future innovations? This panel will confront this conundrum directly through discussions with technical managers and academics from different perspectives. The session is convened by the Educational Alliance for a Parallel Future (EAPF), an organization with wide-ranging industry/academia/research membership, including Intel, ACM, AMD, and other prominent technology corporations.

The panel will be presented on September 15th, 2011 at 10:15am as part of the Intel Developer Forum 2011 at the Moscone Center in San Francisco, California. There are free passes for interested educators. Register now for a free IDF day pass using promo code DCPACN1.

My specific take has always been that I am not as interested in grounding students in a specific parallelism library or abstraction. The pace of change in this area has only increased over the last few years with the rise of multi-core, GPGPU, HPC and heterogeneous computing. Techniques and libraries have arisen, gained adoption, and fallen out of favor one after another.

A developer who only understands how algorithms map to OpenMP-style libraries is not as useful once the team moves to Grand Central Dispatch or OpenCL. A grounding in traditional task-level parallelism as well as data-parallelism techniques is a starting point. It is important to understand not only what each of them is, but also the different types of problems to which each is applicable.
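As a rough illustration of that distinction, here is a minimal C++11/OpenMP sketch; the work items are stand-ins of my own invention, not anything from a real course.

// Task parallelism vs. data parallelism in one small program.
// Compile with: g++ -std=c++11 -fopenmp -pthread example.cpp
#include <cstdio>
#include <future>
#include <vector>

// Two independent tasks; stand-ins for real, unrelated pieces of work.
static int decode_header() { return 42; }
static int decode_body()   { return 7; }

int main() {
    // Task parallelism: different, independent operations run concurrently.
    auto h = std::async(std::launch::async, decode_header);
    auto b = std::async(std::launch::async, decode_body);
    int combined = h.get() + b.get();

    // Data parallelism: the same operation applied across a collection.
    std::vector<float> samples(100000, 1.0f);
    #pragma omp parallel for
    for (int i = 0; i < (int)samples.size(); ++i)
        samples[i] *= 0.5f;

    printf("combined = %d\n", combined);
    return 0;
}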

Higher-level abstractions like OpenMP are good for introductory courses. However, it is important to understand fully how high-level abstractions map to lower-level implementations and even to the hardware itself. Understanding the hardware your software runs on is critical to getting the best performance from your code. It is also critical to understanding why one particular higher-level library might work better than another for a particular task on specific hardware.

Once you understand things like hyperthreading, pThreads, and locking mechanisms, and why OpenCL or CUDA map really well to some problems but not to others, then you can return to using higher-level abstractions that let you focus on your algorithm and not the details.
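The classic increment-race exercise captures the kind of low-level understanding I mean. This is a generic teaching sketch using pThreads, not material from the panel:

// Two threads increment a shared counter. counter += 1 is a
// read-modify-write, so without the mutex the threads interleave and
// updates are lost; with it, the result is deterministic.
// Compile with: g++ example.cpp -pthread
#include <pthread.h>
#include <cstdio>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void* worker(void*) {
    for (int i = 0; i < 1000000; ++i) {
        pthread_mutex_lock(&lock);
        counter += 1;
        pthread_mutex_unlock(&lock);
    }
    return nullptr;
}

int main() {
    pthread_t t1, t2;
    pthread_create(&t1, nullptr, worker, nullptr);
    pthread_create(&t2, nullptr, worker, nullptr);
    pthread_join(t1, nullptr);
    pthread_join(t2, nullptr);
    printf("counter = %ld\n", counter); // 2000000 with the lock; unpredictable without
    return 0;
}

A student who has debugged the unlocked version of this by hand knows exactly what an OpenMP reduction clause is doing on their behalf, and why it is worth having.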

If I were a Dean of Computer Science somewhere, I’d look to creating a curriculum where parallel programming using higher-level abstractions was part of the introductory courses, using something like C++11, OpenMP or TBB. Mid-level requirements would include some computer architecture instruction: specifically, how computer architecture maps to the software that runs on top of it. This may also include some lower-level instruction in things like pThreads, race conditions, lock-free programming, or even GPU or heterogeneous programming techniques using OpenCL. In later courses focused more on software engineering, specific areas like graphics, or larger projects, I’d encourage the students to use whichever tools they found most appropriate to the tasks at hand. This might even include very high-level proprietary abstractions like DirectCompute or C++ AMP, as long as the students could make the tradeoffs intelligently because of their understanding of the area from previous courses.

Given that the panel consists of representatives from Intel, AMD, Microsoft, and Georgia Tech, as well as myself, I’m expecting this to be a very spirited conversation. I hope to see you there.

More information:
Paul Steinberg’s blog post about the panel
Ben Gaster’s post

Comments

Cross-posted from my old Adobe blog.

I just approved and then changed my mind and un-approved a comment. The comment was a fair, if somewhat harsh, criticism of the Pixel Bender Toolkit. I originally decided to approve it because it was one person’s opinion and a response to something I wrote, and I don’t mind answering criticisms (even when they are worded less-than-delicately). However, I changed my mind because the writer decided not to include a valid name or e-mail to respond to.

So, that will be a rule I’m going to hold on to moving forward. If you want to post your honest opinion about something I write, I will always try to honor it: I will post it and respond, as long as your comments:

  • are honest
  • are not advertising
  • are not overt flame-bait
  • do not swear
  • are signed with your real name (or handle) AND e-mail address (which is not published, but lets me know that you are willing to put your name to something)

Hopefully, this should not strike anyone as draconian.

JJ, if you want to re-post with your real name and e-mail address, I will gladly approve your comment.

Speaking on the “Teach Parallel” show on IntelTV tomorrow

[crosspost from my adobe.com blog]

Tomorrow morning, I’ll be speaking with Paul Steinberg of Intel and Tom Murphy of Contra Costa College about how critical an understanding of parallel programming techniques is for industry.

In my previous role on the Adobe Image Foundation, it was an obvious requirement for our hiring candidates. We were building tools for an insanely parallel problem: image and video processing. Now that I’m working on a new product, it might seem that this would be less important. In fact, our threading models are even more complicated than in my previous group, and my expectations around threading knowledge for incoming candidates are just as high.

Even the most modest mobile hardware is going (or has gone) parallel. In addition, users’ expectations around interactivity with their applications have never been higher. A laggy touch interface is death to an application (or a platform). Going to get coffee while your image renders on a desktop is a thing of the past. Users’ expectations of the software we write are higher than ever, and it is nearly impossible to deliver this interactivity without taking advantage of multi-threading on today’s multi-core processors.

The tools continue to improve, but the threading models continue to evolve. A fundamental understanding of multi-threading is critical for anyone moving into Software Engineering or looking to stay current in their field.

I always enjoy talking with Paul and Tom, and expect that we’ll have a lively conversation.

Tune in live on May 17, 10:00 AM PDT

Here is Paul’s post on the subject.

A Couple New Pixel Bender Links

Royi Avital, a frequent contributor to the Pixel Bender forums, has released a new set of After Effects and Photoshop plug-ins written with Pixel Bender under the name Flixel Plugins. The first three are now available on aescripts.com.

Flixel Plugins on aescripts.com

ApexVJ is a really beautiful Flash-based music visualizer that uses Pixel Bender.

Simo Santavirta, the creator, wrote an article on his blog about it.

Moving on…

After many happy and productive years working on Pixel Bender and the Adobe Image Foundation, I’ve decided to take on some new challenges. I’m still at Adobe, but I’m now building a new team and launching a brand new product in the Photoshop family. I can’t say too much yet, but I will have news soon. I’ll still be posting about Pixel Bender stuff here (I’m still a very enthusiastic user!), but for the latest news, you should now also watch the official Pixel Bender blog.