Having problems with the Adobe Connect add-in on OS X? Here is how to uninstall it.

I’m posting this here because it took me more than 20 minutes of googling to find the answer (and I’m an Adobe employee).

The Adobe Connect add-in uses Flash, and sometimes an update to Flash on your system puts Connect into a bad state. You’ll see this when the add-in tries to launch: it gets stuck on a small window that says “Loading Adobe Connect…” and never finishes.

The way to fix this problem is to uninstall the Adobe Connect add-in. Unfortunately, Adobe doesn’t make that easy: there is no uninstaller and no information on the Adobe web site. Here is where the add-in is installed:

~/Library/Preferences/Macromedia/Flash Player/www.macromedia.com/bin/connectaddin

Delete that directory and you have uninstalled the add-in. Your Connect sessions will now be hosted in your web browser until the next time you need add-in functionality, at which point you’ll be prompted to re-install it.

Hopefully this solves your problem and you found it faster than I did.

(tip of the hat to Aral Balkan, who had to do this a few years ago too)

Speaking once again on Parallelism and Computer Science Education at the Intel Developer Forum

Cross-posted from my old Adobe Blog

As a hiring manager building teams working on modern computer software, I’ve often been disappointed in the lack of a proper foundation in parallel algorithms and architectures in current Computer Science curricula. To that end, I’ve been working with a group called the Educational Alliance for a Parallel Future, which aims to improve Computer Science curricula in this critical area. The EAPF is once again convening a panel of educators and industry representatives to talk about this important issue, and once again I am delighted to participate.

The panel is entitled: Parallel Education Status Check – Which Programming Approaches Make the Cut for Parallelism in Undergraduate Education? Unlike previous iterations of this panel where we spoke in generalities, this time we’ll be diving a bit deeper into specific technologies that we think are good starting places for educators to introduce to their students.

Here is an excerpt of the abstract:
The industry and research communities face increasing workforce preparedness challenges in parallel (and distributed) computing, due to today’s ubiquitous multi-/many-core and cloud computing. Underlying the excitement over technical details of the newest platforms is one of the thorniest questions facing educators and practitioners — What languages, libraries, or programming models are best suited to make use of current and future innovations? This panel will confront this conundrum directly through discussions with technical managers and academics from different perspectives. The session is convened by the Educational Alliance for a Parallel Future (EAPF), an organization with wide-ranging industry/academia/research membership, including Intel, ACM, AMD, and other prominent technology corporations.

The panel will be presented on September 15th, 2011 at 10:15am as part of the Intel Developer Forum 2011 at the Moscone Center in San Francisco, California. There are free passes for interested educators. Register now for a free IDF day pass using promo code DCPACN1.

My specific take has always been that I am not as interested in grounding students in a specific parallelism library or abstraction. The pace of change in this area has only increased over the last few years with the rise of multi-core, GPGPU, HPC and heterogeneous computing. Techniques and libraries have arisen, gained adoption, and fallen out of favor one after another.

A developer who only understands how algorithms can be mapped to OpenMP-style libraries is not as useful once the team moves to Grand Central Dispatch or OpenCL. A grounding in traditional task-level parallelism as well as data-parallelism techniques is a starting point. It is important not only to understand what each of them is, but also the different types of problems to which each is applicable, as sketched below.
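To make the distinction concrete, here is a minimal sketch of my own (not from the panel materials) expressing the same flavor of work as a data-parallel loop and as two independent tasks in OpenMP; the function names and data are hypothetical:

    #include <cstddef>
    #include <vector>

    // Data parallelism: the same operation is applied independently to every
    // element, so the loop iterations can be split across threads.
    void brighten(std::vector<float>& pixels, float amount) {
        #pragma omp parallel for
        for (std::ptrdiff_t i = 0; i < static_cast<std::ptrdiff_t>(pixels.size()); ++i) {
            pixels[i] += amount;
        }
    }

    // Task parallelism: two unrelated pieces of work that can run concurrently.
    void process_frame(std::vector<float>& image, std::vector<float>& audioLevels) {
        #pragma omp parallel sections
        {
            #pragma omp section
            brighten(image, 0.1f);                         // image work

            #pragma omp section
            audioLevels.assign(audioLevels.size(), 0.0f);  // audio work
        }
    }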

Higher-level abstractions like OpenMP are good for introductory courses. However, it is important to understand fully how these abstractions map to lower-level implementations and even to the hardware itself. Understanding the hardware your software runs on is critical to finding the best performance for your code, and to understanding why one higher-level library might work better than another for a particular task on specific hardware.

Once you understand things like hyperthreading, pthreads, locking mechanisms, and why OpenCL or CUDA maps really well to some problems but not to others, then you can return to using higher-level abstractions that let you focus on your algorithm and not the details.
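For contrast, here is a rough sketch of what a similar reduction looks like one level down, with explicit threads and a lock (std::thread and std::mutex standing in for raw pthreads). It is my own illustration of the bookkeeping that an OpenMP reduction clause hides, not anyone’s production code:

    #include <algorithm>
    #include <cstddef>
    #include <mutex>
    #include <thread>
    #include <vector>

    // Explicit threading: each worker sums a slice of the data, then a mutex
    // protects the shared total. OpenMP's reduction clause does all of this
    // bookkeeping for you; writing it by hand shows what the abstraction manages.
    double parallel_sum(const std::vector<double>& data, unsigned num_threads) {
        double total = 0.0;
        std::mutex total_mutex;
        std::vector<std::thread> workers;

        const std::size_t chunk = data.size() / num_threads + 1;
        for (unsigned t = 0; t < num_threads; ++t) {
            workers.emplace_back([&, t] {
                const std::size_t begin = t * chunk;
                const std::size_t end = std::min(begin + chunk, data.size());
                double local = 0.0;                     // accumulate without the lock
                for (std::size_t i = begin; i < end; ++i) local += data[i];
                std::lock_guard<std::mutex> guard(total_mutex);
                total += local;                         // keep the critical section short
            });
        }
        for (auto& w : workers) w.join();
        return total;
    }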

If I were a Dean of Computer Science somewhere, I’d look at creating a curriculum where parallel programming using higher-level abstractions was part of the introductory courses, using something like C++11, OpenMP or TBB. Mid-level requirements would include some computer architecture instruction, specifically how computer architecture maps to the software that runs on top of it. This might also include some lower-level instruction in things like pthreads, race conditions, lock-free programming, or even GPU or heterogeneous programming techniques using OpenCL. In later courses focused more on software engineering, specific areas like graphics, or larger projects, I’d encourage the students to use whichever tools they found most appropriate to the tasks at hand. This might even include very high-level proprietary abstractions like DirectCompute or C++ AMP, as long as the students could make the tradeoffs intelligently because of their understanding of the area from previous courses.
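As one example of the kind of lower-level topic I have in mind, here is a minimal sketch of my own showing a classic race condition on a shared counter next to the lock-free fix with std::atomic from C++11; the thread and iteration counts are arbitrary:

    #include <atomic>
    #include <iostream>
    #include <thread>
    #include <vector>

    int main() {
        constexpr int kThreads = 4;
        constexpr int kIncrementsPerThread = 100000;

        int racy = 0;                     // ++racy is a read-modify-write: increments can be lost
        std::atomic<int> atomicCount{0};  // lock-free fix: each increment is indivisible

        std::vector<std::thread> workers;
        for (int t = 0; t < kThreads; ++t) {
            workers.emplace_back([&] {
                for (int i = 0; i < kIncrementsPerThread; ++i) {
                    ++racy;          // data race: undefined behavior in C++11
                    ++atomicCount;   // well-defined without a lock
                }
            });
        }
        for (auto& w : workers) w.join();

        std::cout << "racy = " << racy
                  << ", atomic = " << atomicCount.load()
                  << " (expected " << kThreads * kIncrementsPerThread << ")\n";
        return 0;
    }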

Given that the panel consists of representatives from Intel, AMD, Microsoft, and Georgia Tech, as well as myself, I’m expecting this to be a very spirited conversation. I hope to see you there.

More information:
Paul Steinberg’s blog post about the panel
Ben Gaster’s post

Comments

Cross-posted from my old Adobe blog.

I just approved and then changed my mind and un-approved a comment. The comment was a fair, if somewhat harsh, criticism of the Pixel Bender Toolkit. I originally decided to approve it because it was one person’s opinion and a response to something I wrote, and I don’t mind answering criticisms (even when they are worded less-than-delicately). However, I changed my mind because the writer decided not to include a valid name or e-mail to respond to.

So, that will be a rule I’m going to hold on to moving forward. If you want to post your honest opinion in response to something I write, I will always try to honor that: I will post it and respond, as long as your comments are:

  • honest
  • not advertising
  • not overt flame-bait
  • free of swearing
  • signed with your real name (or handle) AND an e-mail address (which is not published, but lets me know that you are willing to put your name to something)

Hopefully, this should not strike anyone as draconian.

JJ, if you want to re-post with your real name and e-mail address, I will gladly approve your comment.

Speaking on the “Teach Parallel” show on IntelTV tomorrow

[crosspost from my adobe.com blog]

Tomorrow morning, I’ll be speaking with Paul Steinberg of Intel and Tom Murphy of Contra Costa College about how critical an understanding of parallel programming techniques is for industry.

In my previous role on the Adobe Image Foundation, it was an obvious requirement for our hiring candidates. We were building tools for an insanely parallel problem: image and video processing. Now that I’m working on a new product, it might seem that this would not be as important. In fact, our threading models are even more complicated than in my previous group, and my expectations around threading knowledge for incoming candidates are just as high.

Even the most modest mobile hardware is going (or has gone) parallel. In addition, users’ expectations around interactivity with their applications have never been higher. A laggy touch interface is death to an application (or a platform). Going to get coffee while your image renders on a desktop is a thing of the past. Users’ expectations of the software we write are higher than ever, and it is nearly impossible to deliver this interactivity without taking advantage of multi-threading on today’s multi-core processors.
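As a hypothetical illustration of what that means in practice (the Document and Image types and the render function below are stand-ins, not anything from a real product), the expensive work is pushed onto a background thread and the UI thread only polls for the finished result:

    #include <chrono>
    #include <future>
    #include <thread>

    // Hypothetical stand-ins for a real document and a rendered image.
    struct Document {};
    struct Image {};

    // Stand-in for the expensive operation (e.g. rendering an image).
    Image render(const Document&) {
        std::this_thread::sleep_for(std::chrono::seconds(2));
        return Image{};
    }

    // Kick the render onto a background thread so the UI thread never blocks on it.
    std::future<Image> startRender(const Document& doc) {
        return std::async(std::launch::async, [doc] { return render(doc); });
    }

    // Called from the UI thread each tick: poll without blocking, and fetch the
    // result only once the background render has finished.
    bool tryShowResult(std::future<Image>& pending) {
        if (pending.valid() &&
            pending.wait_for(std::chrono::seconds(0)) == std::future_status::ready) {
            Image result = pending.get();  // hand this to the view layer
            (void)result;
            return true;
        }
        return false;
    }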

The tools continue to improve, but the threading models continue to evolve. A fundamental understanding of multi-threading is critical for anyone moving into Software Engineering or looking to stay current in their field.

I always enjoy talking with Paul and Tom, and expect that we’ll have a lively conversation.

Tune in live on May 17, 10:00 AM PDT

Here is Paul’s post on the subject.

Speaking at the AMD Fusion Developer Summit – June

If you are planning on attending the AMD Fusion Developer Summit in Bellevue, WA in June, come see me talk about Pixel Bender (probably for the last time!) with Bob Archer. Here is the description of the session:

Pixel Bender is a domain-specific image processing language created by the Adobe Image Foundation, and includes a runtime designed to work well across heterogeneous hardware, scaling efficiently for multiple cores. This runtime currently ships in a number of Adobe’s flagship products. Bob Archer, Technical Lead, and Kevin Goldsmith, Engineering Manager, will talk about the design of the language, compilers, and runtime. They will also discuss how the Adobe system can incorporate complementary technologies like OpenCL and can scale to accommodate new hardware paradigms like the AMD Fusion processors.

Hope to see you there!

HPC on the (relative) cheap using public cloud providers

For the past several years, I’ve been working on leveraging high-performance computing techniques for high-throughput, data-intensive processing on desktop computers for stuff like image and video processing. It’s been fun tracking what the multi-processing end of HPC has been doing, where the top 100 supercomputer list has been very competitive and very active. Countries, IHVs, and universities vie for who can generate more teraflops, spending millions and millions of dollars on the cooling plants alone for their dedicated data centers. These supercomputers exist to solve the BIG PROBLEMS of computing, and aren’t really useful beyond that.

At the same time, I’ve been following the public computing clouds like Amazon’s EC2, Google’s App Engine and Rackspace’s public cloud. These have been interesting for providing compute at the other end of the spectrum: occasional compute tasks, or higher average workloads with the occasional spike capability (like web apps). The public clouds are made up of thousands of servers and certainly rival or best the supercomputers in numbers of cores and raw compute power, but they exist for a different purpose.

This article in The Register really got me excited, especially when I read this:

Stowe tells El Reg that during December last year, Cycle Computing set up increasingly large clusters on behalf of customers to start testing the limits. First, it did a 2,000-core cluster in early December, and then a 4,096-core cluster in late December. The 10,000-core cluster that Cycle Computing set up and ran for eight hours on behalf of Genentech would have ranked at 114 on the Top 500 computing list from last November (the most current ranking), so it was not exactly a toy even if the cluster was ephemeral.

The cost of running this world-class supercomputer?

Genentech loaded up its code and ran the job for eight hours at a total cost of $8,480, including EC2 compute and S3 storage capacity charges from Amazon and the fee for using the Cycle Computing tools as a service.

Real-world HPC is now reaching price points where it is accessible even to small companies or research groups. This seems like a ripe opportunity for companies that can apply HPC techniques to solve real problems for others, and for tools vendors that can make using these ephemeral clouds easier for companies that want to take advantage of them without having to build up high-end expertise in-house.

On Test-Driven Development

I was having a conversation with someone the other day about unit testing. OK, actually I was interviewing someone for a Quality Engineering position on my team. We were talking about the difference between white-box tests that quality engineers write and tests that developers write.

I suggested that good white-box testers test the functionality and the failure cases (the intent of the function), while developers test the code that they’ve written (the function as coded). This then led me to a new revelation about test-first development methodologies (or possibly reminded me of something I had forgotten).

I have been a proponent of writing tests first since I started doing Extreme Programming and read Kent Beck’s original book, Extreme Programming Explained: Embrace Change, while working at Bootleg Networks (thanks, Carmine, for making me do that, by the way). Admittedly, though, like many developers, I haven’t always been that rigorous about following that rule.

What I like about writing the tests before the function is that it clarifies my thinking about what the function should do, alerts me to the corner cases, gives me reason to consider whether the function is doing too much, and gives me a way to instantly know whether the function works once it is written. Writing the tests first also makes sure that the tests get written at all. Once the function is coded, it is tempting to move on to the next bit of coding work with the intention of filling in the tests later.
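As a minimal, hypothetical illustration of that workflow (the clamp function and its cases are mine, not something from the conversation), the test gets sketched first and forces the corner cases into the open before the implementation exists:

    #include <cassert>

    // The function under test. In a test-first flow this body is written only
    // after the test below has spelled out what it should do.
    int clamp(int value, int low, int high) {
        if (value < low) return low;
        if (value > high) return high;
        return value;
    }

    // Written first: it captures the intent, including the boundary cases that
    // are easy to skip once the implementation already "looks right".
    void testClamp() {
        assert(clamp(5, 0, 10) == 5);    // in range: unchanged
        assert(clamp(-3, 0, 10) == 0);   // below range: pinned to low
        assert(clamp(42, 0, 10) == 10);  // above range: pinned to high
        assert(clamp(0, 0, 10) == 0);    // boundaries stay put
        assert(clamp(10, 0, 10) == 10);
    }

    int main() {
        testClamp();
        return 0;
    }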

What I hadn’t considered about writing the tests before the code is that it puts me into a quality mindset without any bias toward the code as I’d written it. I’m divorced from my own blind spots around my coding. This actually leads me to write better tests, because I have no assumptions about how the code should work or fail. I’m testing the functionality, not the code.

Maybe I’d thought about this before, but I hadn’t really considered that benefit recently until that moment. Now, when I start to get lazy about writing my unit tests before my implementation, I’ll have a better reason to keep up my discipline.

A Couple New Pixel Bender Links

Royi Avital, a frequent contributor to the Pixel Bender forums, has released a new set of After Effects and Photoshop plug-ins written with Pixel Bender under the name Flixel Plugins. The first three are now available on aescripts.com.

Flixel Plugins on aescripts.com

ApexVJ is a really beautiful Flash-based music visualizer that uses Pixel Bender.

Simo Santavirta, the creator, wrote an article on his blog about it.