Source: Visual Capitalist
A Supercomputer In Your Pocket
September 2, 2015 7:00pm by Barry Ritholtz
What's been said:
Cool stuff Barry, Thanks.
This is the sort of stuff that makes those of us who know a few things about computer architecture roll our eyes.
Sure, you can obtain impressive floating-point CPU benchmarks on a modern smart phone. But that’s not what makes a supercomputer.
Supercomputers had, and still have, not only bleeding-edge floating-point performance; they also have IO channels that can move rivers of data in and out of the CPU(s). A smartphone has an IO pathway that's the size of a garden hose, relatively speaking. And a good rule of thumb for whether a platform is a supercomputer, or can be compared to one, is this: is there a competent FORTRAN compiler for it?
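For concreteness, the kind of number those phone benchmarks report can be sketched in a few lines of Python. This is a rough illustration, not any particular benchmark suite; it measures arithmetic throughput only and says nothing about IO bandwidth:

    # Rough sketch of a floating-point "benchmark": time a dense matrix
    # multiply and convert it to GFLOPS. This measures compute only; it
    # says nothing about how fast data can be streamed on and off the chip.
    import time
    import numpy as np

    n = 1024
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    start = time.perf_counter()
    c = a @ b
    elapsed = time.perf_counter() - start

    flops = 2 * n**3  # roughly 2*n^3 floating-point operations in an n x n matmul
    print(f"~{flops / elapsed / 1e9:.1f} GFLOPS (arithmetic only, no IO measured)")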
On the other hand, researchers _have_ constructed supercomputers by basically daisy-chaining consumer-grade PCs, and can do fairly sophisticated modeling with banks of videogame GPUs. Sure, the comparisons on this chart gloss over many technical differences between the items being compared–you can’t easily substitute one for the other in many cases. And the chart pretty much just states the obvious–we have a lot more computer power available to us than in the past, enough that we can carry chunks of it around in our pocket and use it for watching cat videos whenever we like.
Yes, several of the modern supercomputer architectures are massively parallel arrays or “cubes” of Intel or AMD multi-core CPUs. But… again, the IO interfaces between these chips+memory stacks aren't your run-of-the-mill interfaces. The ability to push huge streams of data in and out of the computational elements of a supercomputer is what makes it a supercomputer. You can't take on “huge data” problems without being able to get the “huge data” in and out of your wickedly fast CPU.
GPUs on your desk are an example of a processor that is optimized for a particular problem space, i.e., graphics algorithm processing. These chips, in their problem domain, can make a better claim to being a “supercomputer on your desk” than the general-purpose CPU processing your text, because they also have the interfaces necessary to move huge amounts of data through them at very tidy speeds.
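A crude way to see why bandwidth, not just peak arithmetic, sets the ceiling is a roofline-style estimate. The hardware numbers below are invented for illustration only, not specs of any real phone or machine:

    # Roofline-style sketch (all hardware numbers are made up for illustration):
    # attainable throughput is capped by whichever is lower, raw compute or the
    # rate at which memory/IO can feed the arithmetic units.
    def attainable_gflops(peak_gflops, mem_bw_gb_s, flops_per_byte):
        # flops_per_byte ("arithmetic intensity"): work done per byte moved.
        return min(peak_gflops, mem_bw_gb_s * flops_per_byte)

    # Same hypothetical peak compute, very different data paths.
    for intensity in (0.25, 1.0, 8.0, 64.0):
        narrow = attainable_gflops(peak_gflops=100.0, mem_bw_gb_s=10.0,
                                   flops_per_byte=intensity)
        wide = attainable_gflops(peak_gflops=100.0, mem_bw_gb_s=500.0,
                                 flops_per_byte=intensity)
        print(f"{intensity:5.2f} flop/byte: narrow-IO ~{narrow:6.1f} GFLOPS, "
              f"wide-IO ~{wide:6.1f} GFLOPS")

Data-heavy workloads (few flops per byte moved) get throttled by the garden hose long before the arithmetic units run out of steam.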
My point was that the ability to run a floating-point benchmark on one’s cell phone doesn’t make it a supercomputer, any more than putting a blower and nitro induction onto a VW Beetle turns it into a drag racing car. Just having an engine that puts out 2,000 BHP at the crankshaft won’t get you 1/4 mile checkered flags.
And most PC or Mac computers weren't all that fast; they just were what consumers could or would buy. So when you try to compare them to a specialized computer like a Cray, or even to mainframes of the same vintage, you find they don't come close. Nor does some of the more recent hardware. They are each solving different problems.
And while there have been experiments to create supercomputers from a gaggle of PCs, that only works if you have a problem with a small dataset, since you will lose a lot of time just trying to coordinate the systems.
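That coordination cost can be sketched with a toy model; the numbers below are hypothetical and only meant to show the shape of the trade-off:

    # Toy model of clustering commodity PCs (hypothetical numbers): each node
    # shares the compute, but every node added also adds coordination cost.
    def wall_time(total_work_s, nodes, coord_s_per_node):
        compute = total_work_s / nodes           # work split evenly across nodes
        coordination = coord_s_per_node * nodes  # grows with cluster size
        return compute + coordination

    for nodes in (1, 4, 16, 64, 256):
        t = wall_time(total_work_s=3600.0, nodes=nodes, coord_s_per_node=0.5)
        print(f"{nodes:4d} nodes -> {t:8.1f} s")

In this made-up example the cluster stops helping somewhere past 64 nodes: the win depends entirely on how little coordination the problem needs.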
I am just a hobbyist when it comes to computers. From what I have read, Python has replaced Fortran as the language of choice for mathematics. It is not compiled to machine code; it sits somewhere between a compiled and an interpreted language. I studied Fortran forty years ago in college and found it a tedious language. Maybe, from what you have said, Fortran is not completely obsolete yet. Compiled is faster.
The dominance of Fortran in scientific computing
http://arstechnica.com/science/2014/05/scientific-computings-future-can-any-coding-language-top-a-1950s-behemoth/
And why:
http://arstechnica.com/information-technology/2014/05/ask-ars-why-are-some-programming-languages-faster-than-others/
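The "compiled is faster" point in the articles above is easy to see for yourself. Exact timings depend on the machine, but the gap between an interpreted Python loop and a compiled (C/Fortran-backed) routine is typically one to two orders of magnitude:

    # Same sum, two ways: an interpreted Python loop versus NumPy's compiled
    # routine. Exact timings vary by machine; the size of the gap is the point.
    import time
    import numpy as np

    data = np.random.rand(10_000_000)

    start = time.perf_counter()
    total = 0.0
    for x in data:            # one interpreter dispatch per element
        total += x
    loop_time = time.perf_counter() - start

    start = time.perf_counter()
    total_np = data.sum()     # a single call into compiled code
    numpy_time = time.perf_counter() - start

    print(f"Python loop: {loop_time:.2f} s, NumPy: {numpy_time:.4f} s")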
When I was in school, and computers were starting to pass 4.7 MHz, one of the teachers did a quick back-of-the-envelope calculation about the physical limitation of speed. He assumed away all manufacturing limitations and asserted that the maximum possible speed was about 3000 MHz (none of us had really pondered the term “GHz”).
He believed that past 3000 MHz, designers would no longer be able to use semiconductor-based integrated circuits. His theory was that the minimum size for a transistor conducting line was 0.5 nm (about 5 atoms), and the minimum space between lines was twice that. Based on the speed of electricity through silicon, he decided that 3000 MHz was the limit of computing speed. (This is an oversimplification, because I don’t understand or remember enough.)
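His exact arithmetic isn't recoverable here, but the flavor of the constraint is easy to reproduce: at a given clock rate, a signal moving at some fraction of the speed of light can only cover so much distance per cycle. The one-half propagation fraction below is an assumption, not a measured figure:

    # Back-of-envelope sketch (assumed numbers, not the teacher's actual figures):
    # distance a signal can travel in one clock period at an assumed on-chip
    # propagation speed of half the speed of light.
    C = 3.0e8  # speed of light in vacuum, m/s

    def reach_per_cycle_mm(clock_hz, speed_fraction=0.5):
        period_s = 1.0 / clock_hz
        return C * speed_fraction * period_s * 1000.0  # metres -> millimetres

    for mhz in (4.7, 3000.0, 5000.0):
        print(f"{mhz:7.1f} MHz -> ~{reach_per_cycle_mm(mhz * 1e6):9.1f} mm per cycle")

At a few MHz the signal can cross tens of metres per tick; by a few GHz it can only cross a few centimetres, roughly the scale of a chip package, so the constraint stops being academic.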
Although computers have broken that speed limit using a variety of tricks, I have not heard of anyone who thinks conducting lines can be smaller than 0.5 nm. The distance between lines has been narrowed to a dozen nm (some say) without loss of reliability, so there seems to be room for at least one more order of magnitude of manufacturing improvement.
But after this next set of upgrades, computing power will (maybe) have hit a wall. There are plenty of new tricks, of course, but we might be fresh out of exponential improvements. This is a comforting thought for some people: Although I could be replaced with a computer, it would have to be a very expensive one.
And while we are at it, the massive reality simulations (theoretically indistinguishable from the real world) might be simply impossible. The bad news: If you have a rent payment due, you should go ahead and pay it.
I think that we are only one or two laptop refreshes away from getting issued either a Microsoft Surface type of tablet, or simply a phone that docks into a monitor and keyboard at work, with a roll-up screen and folding keyboard for working while travelling.
When I started working in the early 80s, secretaries had standalone word processors transcribing what I wrote by hand while I did engineering modeling on a computer terminal hooked up to a minicomputer in another city.
By the late 80s, we were doing most of our own processing on desktop PCs and we could do decent modeling on them as well.
By the mid-90s, laptops were being issued, and many of our analyses that used to take 2 hours of minicomputer time now took 2 minutes on a laptop. We also started having e-mail with clients etc., and the Internet started to become relevant.
Now I often don't even turn on my laptop at home, because I can keep tabs on things through a smartphone. I only turn on my laptop if I need to see things in detail, such as large documents, or to do editing/writing. The biggest constraint is now us, namely our ability to resolve details through visual and tactile interfaces, which is why keyboards and monitors will likely be around for a while, until we can just type on a table top and hook something up to our glasses for a heads-up view.
I am surprised that we don’t yet have a roll-up full size keyboard with wireless connections (and about the size of a cigar when rolled up/away). That would make phones so much more useful as computers.
Here's a long but readable article along these lines about advances in AI. The future could be either super bright or super scary; we just don't know which.
http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html