
Crisis horizon: why efficient coding is crucial

Posted by Andrew Shephard on 7th October 2014

The semiconductor business is running out of ways to make things smaller, faster and lower power. The reasons are fairly unfathomable to most sensible people, but like so much in life it all boils down to cash.

Investments in manufacturing processes that are battling the laws of physics but will deliver no financial return for four or five years are simply not attractive. And when I say investments, we're talking billions of dollars; it's so much money that one company cannot possibly afford the investment alone, so much money that even governments cannot contribute enough support to make a real difference.

To be fair it's not quite at crisis point yet, but those in the industry will confirm that critical advances have certainly slowed, and whatever happens now we will see a dip in the performance/price curve in the next two years. The cost of raw digital performance, by which I mean making things operate faster or smaller or with less energy, will increase because of this dip. Some markets can't afford to pay more, so this could quite literally determine which advances – trivial or life-saving – certain people can have in their lives.

So I think coding, and more importantly efficient coding, has become a most valuable skill. We all need/expect/want things to get better, faster and more cleverly integrated – that's progress. We don't want to stop innovating, developing or pushing the envelope. And if the hardware curve is starting to slow because of economic pressure, it's the coders who have to step up and deliver the performance improvements instead.

Speaking as an ex-programmer, I'd say most modern coding is really lazy: it relies on relatively huge processing power in the target device (which has become really, really cheap, though that cost is levelling out) and on elaborate coding aids and tools that distance the coder from what's really going on at the hardware level.
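To make the point concrete, here's a minimal Python sketch of my own (not from the original post): two functions that compute the same answer, where the "lazy" version materialises a whole intermediate list in memory and the leaner version streams one value at a time. The function names are illustrative, not from any real codebase.

```python
# "Lazy" style: build the entire list of squares in memory,
# then sum it. Simple to write, but the whole intermediate
# result lives in RAM at once.
def sum_squares_list(n):
    return sum([i * i for i in range(n)])

# Leaner style: a generator expression yields one value at a
# time, so memory use stays constant regardless of n.
def sum_squares_gen(n):
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    n = 1_000_000
    # Both produce the same result; only the resource cost differs.
    assert sum_squares_list(n) == sum_squares_gen(n)
    print(sum_squares_gen(n))
```

On a desktop PC the difference is invisible; on a constrained device like a Raspberry Pi, this kind of choice is exactly the low-level awareness the next paragraph is talking about.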

Having a wave of programmers, courtesy of the coolness of Raspberry Pi and similar devices, who truly 'get' coding at the lowest levels can only lead to new ideas, innovative new approaches and thinner, more efficient solutions.

This means the target devices won’t need quite as much raw performance and the tools to develop our new products can be simpler, cheaper, and faster.

Crisis averted.

photo credit: Nat W

Andrew Shephard

Andrew’s engineering background and ‘fluff-free’ attitude combined with probably the broadest knowledge of technology installed in one PR brain ensures critical insight for Wildfire’s clients. He has driven campaigns for major forces in the semiconductor industry over 18 years including NEC Electronics, Sun Microelectronics and TSMC along with game-changing start-ups like Achronix and Nujira.

  • Alexander Perryman

    Just to date myself, do you remember the kind of stuff programmers managed to do with 48, 128 or 640K of memory back in the 80s? I'd love to see any coder today manage to eke that level of performance out of that little memory. They'd be stumped. There are now too many layers of abstraction in coding, put there so that even an idiot like m'self can code with relative ease.

  • @alexanderperryman there are whole competitions built around that exercise: http://www.java4k.com/index.php?action=home