Significant new inventions in computing since 1980

Submitted by 自闭症网瘾萝莉.ら on 2019-12-17 04:36:34

Question


This question arose from comments about different kinds of progress in computing over the last 50 years or so.

I was asked by some of the other participants to raise it as a question to the whole forum.

The basic idea here is not to bash the current state of things but to try to understand something about the progress of coming up with fundamental new ideas and principles.

I claim that we need really new ideas in most areas of computing, and I would like to know of any important and powerful ones that have been done recently. If we can't really find them, then we should ask "Why?" and "What should we be doing?"


Answer 1:


The Internet itself pre-dates 1980, but the World Wide Web ("distributed hypertext via simple mechanisms") as proposed and implemented by Tim Berners-Lee started in 1989/90.

While the idea of hypertext had existed before (Nelson’s Xanadu had tried to implement a distributed scheme), the WWW was a new approach for implementing a distributed hypertext system. Berners-Lee combined a simple client-server protocol, markup language, and addressing scheme in a way that was powerful and easy to implement.

I think most innovations are created in re-combining existing pieces in an original way. Each of the pieces of the WWW had existed in some form before, but the combination was obvious only in hindsight.

And I know for sure that you are using it right now.




Answer 2:


Free Software Foundation (Established 1985)

Even if you aren't a wholehearted supporter of their philosophy, the ideas they have been pushing, free software and open source, have had an amazing influence on the software industry and on content in general (e.g. Wikipedia).




Answer 3:


I think it's fair to say that in 1980, if you were using a computer, you were either getting paid for it or you were a geek... so what's changed?

  • Printers and consumer-level desktop publishing. Meant you didn't need a printing press to make high-volume, high-quality printed material. That was big - of course, nowadays we completely take it for granted, and mostly we don't even bother with the printing part because everyone's online anyway.

  • Colour. Seriously. Colour screens made a huge difference to non-geeks' perception of games & applications. Suddenly games seemed less like hard work and more like watching TV, which opened the doors for Sega, Nintendo, Atari et al to bring consumer gaming into the home.

  • Media compression (MP3s and video files). And a whole bunch of things - like TiVO and iPods - that we don't really think of as computers any more because they're so ubiquitous and so user-friendly. But they are.

The common thread here, I think, is stuff that was once impossible (making printed documents; reproducing colour images accurately; sending messages around the world in real time; distributing audio and video material), and was then expensive because of the equipment and logistics involved, and is now consumer-level. So - what are big corporates doing now that used to be impossible but might be cool if we can work out how to do it small & cheap?

Anything that still involves physical transportation is interesting to look at. Video conferencing hasn't replaced real meetings (yet) - but with the right technology, it still might. Some recreational travel could be eliminated by a full-sensory immersive environment - home cinema is a trivial example; another is the "virtual golf course" in an office building in Soho, where you play 18 holes of real golf on a simulated course.

For me, though, the next really big thing is going to be fabrication. Making things. Spoons and guitars and chairs and clothing and cars and tiles and stuff. Things that still rely on a manufacturing and distribution infrastructure. I don't have to go to a store to buy a movie or an album any more - how long until I don't have to go to the store for clothing and kitchenware?

Sure, there are interesting developments going on with OLED displays and GPS and mobile broadband and IoC containers and scripting and "the cloud" - but it's all still just new-fangled ways of putting pictures on a screen. I can print my own photos and write my own web pages, but I want to be able to fabricate a linen basket that fits exactly into that nook beside my desk, and a mounting bracket for sticking my guitar FX unit to my desk, and something for clipping my cellphone to my bike handlebars.

Not programming related? No... but in 1980, neither was sound production. Or video distribution. Or sending messages to your relatives in Zambia. Think big, people... :)




Answer 4:


Package management and distributed revision control.

These patterns in the way software is developed and distributed are quite recent, and are still just beginning to make an impact.

Ian Murdock has called package management "the single biggest advancement Linux has brought to the industry". Well, he would, but he has a point. The way software is installed has changed significantly since 1980, but most computer users still haven't experienced this change.

Joel and Jeff have been talking about revision control (or version control, or source control) with Eric Sink in Podcast #36. It seems most developers haven't yet caught up even with centralized systems, and DVCS is widely seen as mysterious and unnecessary.

From the Podcast 36 transcript:

0:06:37

Atwood: ... If you assume -- and this is a big assumption -- that most developers have kinda sorta mastered fundamental source control -- which I find not to be true, frankly...

Spolsky: No. Most of them, even if they have, it's the check-in, check-out that they understand, but branching and merging -- that confuses the heck out of them.




Answer 5:


BitTorrent. It completely turns what previously seemed like an obviously immutable rule on its head - the time it takes for a single person to download a file over the Internet grows in proportion to the number of people downloading it. It also addresses the flaws of previous peer-to-peer solutions, particularly around 'leeching', in a way that is organic to the solution itself.

BitTorrent elegantly turns what is normally a disadvantage - many users trying to download a single file simultaneously - into an advantage, distributing the file geographically as a natural part of the download process. Its strategy for optimizing the use of bandwidth between two peers discourages leeching as a side-effect - it is in the best interest of all participants to enforce throttling.
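
As a rough illustration of that distribution idea (this is not the actual protocol, and all names here are invented), a "rarest first" piece picker can be sketched in a few lines of Python: each peer advertises which pieces it already holds, and a downloader asks for the piece it still needs that is scarcest in the swarm.

```python
from collections import Counter

def pick_piece(my_pieces, swarm):
    """Pick the rarest piece we still need. swarm maps peer -> set of piece indices it holds."""
    availability = Counter(piece for pieces in swarm.values() for piece in pieces)
    wanted = [piece for piece in availability if piece not in my_pieces]
    if not wanted:
        return None
    return min(wanted, key=lambda piece: availability[piece])  # rarest first

swarm = {"peer1": {0, 1, 2}, "peer2": {0, 2}, "peer3": {0, 2}}
print(pick_piece(my_pieces={0}, swarm=swarm))  # 1: only one peer has it, so fetch it first
```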

It is one of those ideas which, once someone else invents it, seems simple, if not obvious.




Answer 6:


Damas-Milner type inference (often called Hindley-Milner type inference) was published in 1982 and has been the basis of every sophisticated static type system since. It was a genuinely new idea in programming languages (admittedly based on ideas published in the 1970s, but not made practical until after 1980). In terms of importance I put it up there with Self and the techniques used to implement Self; in terms of influence it has no peer. (The rest of the OO world is still doing variations on Smalltalk or Simula.)
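
To give a flavour of the machinery (a toy sketch only, not the full Damas-Milner algorithm: it omits let-generalization and the occurs check), the heart of the inference is unification over type variables. In Python, with lowercase strings standing for type variables and tuples for arrow types:

```python
def resolve(t, subst):
    """Follow the substitution until we reach an unbound variable or a concrete type."""
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst):
    """Extend subst so that t1 and t2 become equal, or fail."""
    t1, t2 = resolve(t1, subst), resolve(t2, subst)
    if t1 == t2:
        return subst
    if isinstance(t1, str) and t1.islower():        # t1 is a type variable
        return {**subst, t1: t2}
    if isinstance(t2, str) and t2.islower():        # t2 is a type variable
        return {**subst, t2: t1}
    if isinstance(t1, tuple) and isinstance(t2, tuple) and t1[0] == t2[0]:
        return unify(t1[2], t2[2], unify(t1[1], t2[1], subst))
    raise TypeError(f"cannot unify {t1} with {t2}")

# Unifying  a -> Int  with  Bool -> b  solves the variables: a = Bool, b = Int
print(unify(("->", "a", "Int"), ("->", "Bool", "b"), {}))
```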

Variations on type inference are still playing out; the one I would most single out is Wadler and Blott's type-class mechanism for resolving overloading, which was later discovered to offer very powerful mechanisms for programming at the type level. The end of this story is still being written.




Answer 7:


Here's a plug for Google map-reduce, not just for itself, but as a proxy for Google's achievement of running fast, reliable services on top of farms of unreliable, commodity machines. Definitely an important invention and totally different from the big-iron mainframe approaches to heavyweight computation that ruled the roost in 1980.
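
To make the programming model concrete (a single-process toy, nothing like Google's implementation), here is word count written in the map/shuffle/reduce style in Python:

```python
from collections import defaultdict

def map_phase(document):
    # emit (key, value) pairs; in the real system this runs on many machines
    for word in document.split():
        yield word.lower(), 1

def shuffle(pairs):
    # group intermediate values by key
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # fold each group down to a single result
    return key, sum(values)

documents = ["the web is big", "the web grew after 1990"]
pairs = [pair for doc in documents for pair in map_phase(doc)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts["the"], counts["web"])  # 2 2
```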




Answer 8:


Tagging, the way information is categorized. Yes, the little boxes of text under each question.

It is amazing that it took about 30 years to invent tagging. We used lists and tables of contents; we used things which are optimized for printed books.

Still, 30 years is much shorter than the time people needed to realize that printed books could be made in a smaller format, one you can hold in your hands.

I think the tagging concept is underestimated among core CS people. Most research focuses on natural language processing (a top-down approach). But tagging is the first language that both computers and people understand well; it is a bottom-up approach to making computers work with natural language.
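
The mechanics are almost embarrassingly simple, which may be why their value took so long to notice. A toy tag index in Python (illustrative only): each item carries a flat set of tags, and a query is just a set intersection.

```python
from functools import reduce

index = {}  # tag -> set of item ids

def tag(item, *tags):
    for t in tags:
        index.setdefault(t, set()).add(item)

def query(*tags):
    # items carrying all of the given tags
    return reduce(lambda a, b: a & b, (index.get(t, set()) for t in tags))

tag("item-a", "computing", "history", "inventions")
tag("item-b", "computing", "performance")
print(query("computing", "history"))  # {'item-a'}
```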




Answer 9:


I think we are looking at this the wrong way and drawing the wrong conclusions. If I get this right, the cycle goes:

Idea -> first implementation -> minority adoption -> critical mass -> commodity product

From the very first idea to the commodity, you often have centuries, assuming the idea ever makes it to that stage. Da Vinci may have drawn some kind of helicopter in 1493 but it took about 400 years to get an actual machine capable of lifting itself off the ground.

From William Bourne's first description of a submarine in 1580 to the first implementation in 1800, you have 220 years, and current submarines are still in their infancy: we know almost nothing about underwater travel (with two thirds of the planet under sea, think of the potential real estate ;).

And there is no telling that there weren't earlier, much earlier, ideas that we just never heard of. Based on some legends, it looks like Alexander the Great used some kind of diving bell in 332 BC (which is the basic idea of a submarine: a device to carry people and an air supply below the sea). Counting that, we are looking at 2,000 years from idea (even with a basic prototype) to product.

What I am saying is that looking today for implementations, let alone products, that were not even ideas prior to 1980 is ... I betcha the "quick sort" algorithm was used by some no-name file clerk in ancient China. So what?

There were networked computers 40 years ago, sure, but that didn't compare with today's Internet. The basic idea/technology was there, but regardless you couldn't play a game of Warcraft online.

I claim that we need really new ideas in most areas of computing, and I would like to know of any important and powerful ones that have been done recently. If we can't really find them, then we should ask "Why?" and "What should we be doing?"

Historically, we have never been able to "find them" that close to the idea, that fast. I think the cycle is getting faster, but computing is still darn young.

Currently, I am trying to figure out how to make a hologram (the Star Wars kind, without any physical support). I think I know how to make it work. I haven't even gathered the tools, materials, or funding, and yet even if I were to succeed to any degree, the actual idea would already be several decades old at the very least, and related implementations/technologies have been in use for just as long.

As soon as you start listing actual products, you can be pretty sure that concepts and first implementations existed a while ago. Doesn't matter.

You could argue with some reason that nothing is new, ever, or that everything is new, always. That's philosophy and both viewpoints can be defended.

From a practical viewpoint, truth lies somewhere in between. Truth is not a binary concept, boolean logic be damned.

The Chinese may have come up with the printing press a while back, but it's only been about 10 years that most people can print decent color photos at home for a reasonable price.

Invention is nowhere and everywhere, depending on your criteria and frame of reference.




Answer 10:


Google's Page Rank algorithm. While it could be seen as just a refinement of web crawling search engines, I would point out that they too were developed post-1980.
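
As a reminder of how small the core idea is, here is a toy power-iteration sketch in Python (it ignores dangling pages and all the engineering needed to do this at web scale):

```python
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the pages it links to (every page has at least one outlink here)."""
    n = len(links)
    rank = {page: 1.0 / n for page in links}
    for _ in range(iterations):
        new_rank = {page: (1 - damping) / n for page in links}
        for page, outgoing in links.items():
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(links))  # "c" ends up with the highest rank
```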




Answer 11:


DNS, 1983, and dependent advances like email host resolution via MX records instead of bang-paths. *shudder*

Zeroconf working on top of DNS, 2000. I plug my printer into the network and my laptop sees it. I start a web server on the network and my browser sees it. (Assuming they broadcast their availability.)

NTP (1985) based on Marzullo's algorithm (1984). Accurate time over jittery networks.

The mouse scroll wheel, 1995. Using mice without it feels so primitive. And no, it's not something that Engelbart's team thought of and forgot to mention. At least not when I asked someone who was on the team at the time. (It was at some Engelbart event in 1998 or so. I got to handle one of the first mice.)

Unicode, 1987, and its dependent advances for different types of encoding, normalization, bidirectional text, etc.

Yes, it's pretty common for people to use all 5 of these every day.

Are these "really new ideas?" After all, there were mice, there were character encodings, there was network timekeeping. Tell me how I can distinguish between "new" and "really new" and I'll answer that one for you. My intuition says that these are new enough.

In smaller domains there are easily more recent advances. In bioinformatics, for example, Smith-Waterman (1981) and more especially BLAST (1990) effectively make the field possible. But it sounds like you're asking for ideas which are very broad across the entire field of computing, and the low-hanging fruit gets picked first. Thus is it always with a new field.
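
For what it's worth, the core of Smith-Waterman fits in a dozen lines; a sketch of the scoring recurrence in Python (local alignment score only, no traceback, with made-up match/mismatch/gap weights):

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Best local-alignment score between sequences a and b (Smith-Waterman, 1981)."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("ACACACTA", "AGCACACA"))  # score of the best local alignment
```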




Answer 12:


What about digital cameras?

According to Wikipedia, the first true digital camera appeared in 1988, with mass market digital cameras becoming affordable in the late 1990s.




Answer 13:


Modern shading languages and the prevalence of modern GPUs.

The GPU is also a low cost parallel supercomputer with tools like CUDA and OpenCL for blazing fast high level parallel code. Thank you to all those gamers out there driving down the prices of these increasingly impressive hardware marvels. In the next five years I hope every new computer sold (and iPhones too) will have the ability to run massively parallel code as a basic assumption, much like 24 bit color or 32 bit protected mode.




Answer 14:


JIT compilation was invented in the late 1980s.




Answer 15:


To address the two questions about "Why the death of new ideas", and "what to do about it"?

I suspect a lot of the lack of progress is due to the massive influx of capital and entrenched wealth in the industry. Sounds counterintuitive, but I think it's become conventional wisdom that any new idea gets one shot; if it doesn't make it on the first try, it can't come back. It gets bought by someone with entrenched interests, or just FAILs, and the energy is gone. A couple of examples are tablet computers and integrated office software. The Newton and several others had real potential, but ended up (through competitive attrition and bad judgment) squandering their birthrights, killing whole categories. (I was especially fond of Ashton-Tate's Framework; but I'm still stuck with Word and Excel).

What to do? The first thing that comes to mind is Wm. Shakespeare's advice: "Let's kill all the lawyers." But now they're too well armed, I'm afraid. I actually think the best alternative is to find an Open Source initiative of some kind. They seem to maintain accessibility and incremental improvement better than the alternatives. But the industry has gotten big enough so that some kind of organic collaborative mechanism is necessary to get traction.

I also think that there's a dynamic that says that the entrenched interests (especially platforms) require a substantial amount of change - churn - to justify continuing revenue streams; and this absorbs a lot of creative energy that could have been spent in better ways. Look how much time we spend treading water with the newest iteration from Microsoft or Sun or Linux or Firefox, making changes to systems that for the most part work fine already. It's not because they are evil, it's just built into the industry. There's no such thing as Stable Equilibrium; all the feedback mechanisms are positive, favoring change over stability. (Did you ever see a feature withdrawn, or a change retracted?)

The other clue that has been discussed on SO is the Skunkworks Syndrome (ref: Geoffrey Moore): real innovation in large organizations almost always (90%+) shows up in unauthorized projects that emerge spontaneously, fueled exclusively by individual or small group initiative (and more often than not opposed by formal management hierarchies). So: Question Authority, Buck the System.




Answer 16:


One thing that astounds me is the humble spreadsheet. Non-programmer folk build wild and wonderful solutions to real-world problems with a simple grid of formulas. Replicating their efforts in a desktop application often takes 10 to 100 times longer than it took to write the spreadsheet, and the resulting application is often harder to use and full of bugs!

I believe the key to the success of the spreadsheet is automatic dependency analysis. If the user of the spreadsheet was forced to use the observer pattern, they'd have no chance of getting it right.

So the big advance is automatic dependency analysis. Why hasn't any modern platform (Java, .NET, web services) built this into the core of the system? Especially in a day and age of scaling through parallelization, a dependency graph makes parallel recomputation trivial.
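
To show how little machinery the core idea needs (a toy sketch, nothing like a production engine: no cycle detection, no topological batching, no parallelism), each cell can record which cells its formula reads, and everything downstream recomputes on change:

```python
class Sheet:
    """Toy spreadsheet: formulas declare their inputs, and edits recompute dependents automatically."""
    def __init__(self):
        self.values = {}      # cell name -> current value
        self.formulas = {}    # cell name -> (function, input cell names)
        self.dependents = {}  # cell name -> cells whose formulas read it

    def set_value(self, name, value):
        self.values[name] = value
        self._recompute_dependents(name)

    def set_formula(self, name, inputs, fn):
        self.formulas[name] = (fn, inputs)
        for inp in inputs:
            self.dependents.setdefault(inp, set()).add(name)
        self._recompute(name)

    def _recompute(self, name):
        fn, inputs = self.formulas[name]
        self.values[name] = fn(*(self.values[inp] for inp in inputs))
        self._recompute_dependents(name)

    def _recompute_dependents(self, name):
        for dep in self.dependents.get(name, ()):
            self._recompute(dep)

sheet = Sheet()
sheet.set_value("A1", 10)
sheet.set_value("A2", 32)
sheet.set_formula("A3", ["A1", "A2"], lambda x, y: x + y)
sheet.set_formula("A4", ["A3"], lambda total: total * 2)
sheet.set_value("A1", 100)                     # A3 and A4 update without any observer code
print(sheet.values["A3"], sheet.values["A4"])  # 132 264
```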

Edit: Dang - just checked. VisiCalc was released in 1979 - let's pretend it's a post-1980 invention.

Edit 2: It seems the spreadsheet has already been noted by Alan anyway - if the question that brought him to this forum is correct!




Answer 17:


Software:

  • Virtualization and emulation

  • P2P data transfers

  • community-driven projects like Wikipedia, SETI@home ...

  • web crawling and web search engines, i.e. indexing information that is spread out all over the world

Hardware:

  • the modular PC

  • E-paper




Answer 18:


The rediscovery of the monad by functional programming researchers. The monad was instrumental in allowing a pure, lazy language (Haskell) to become a practical tool; it has also influenced the design of combinator libraries (monadic parser combinators have even found their way into Python).

Moggi's "Computational lambda-calculus and monads" (1989) is generally credited with bringing monads into view for effectful computation; Wadler's work (for example, "Imperative functional programming" (1993)) presented monads as a practical tool.
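
To give a flavour of that Python connection (a deliberately tiny sketch, not any particular library): a parser is a function from an input string to a list of (result, remaining input) pairs, and unit/bind are the monad operations that sequence parsers.

```python
def unit(value):
    # wrap a value in a parser that consumes nothing
    return lambda s: [(value, s)]

def bind(parser, f):
    # run parser, then feed its result to f, which yields the next parser
    def parse(s):
        return [pair for (value, rest) in parser(s) for pair in f(value)(rest)]
    return parse

def item(s):
    # consume a single character, failing on empty input
    return [(s[0], s[1:])] if s else []

def satisfy(pred):
    return bind(item, lambda c: unit(c) if pred(c) else (lambda s: []))

def many(parser):
    def parse(s):
        result = many1(parser)(s)
        return result if result else [("", s)]
    return parse

def many1(parser):
    # one or more repetitions, concatenated into a string
    return bind(parser, lambda x:
                lambda s: [(x + xs, rest) for (xs, rest) in many(parser)(s)])

digits = many1(satisfy(str.isdigit))
number = bind(digits, lambda ds: unit(int(ds)))

print(number("123abc"))  # [(123, 'abc')]
```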




Answer 19:


Shrinkwrap software

Before 1980, software was mostly specially written. If you ran a business, and wanted to computerize, you'd typically get a computer and compiler and database, and get your own stuff written. Business software was typically written to adapt to business practices. This is not to say there was no canned software (I worked with SPSS before 1980), but it wasn't the norm, and what I saw tended to be infrastructure and research software.

Nowadays, you can go to a computer store and find, on the shelf, everything you need to run a small business. It isn't designed to fit seamlessly into whatever practices you used to have, but it will work well once you learn to work more or less according to its workflow. Large businesses are a lot closer to shrinkwrap than they used to be, with things like SAP and PeopleSoft.

It isn't a clean break, but after 1980 there was a very definite shift from expensive custom software to low-cost off-the-shelf software, and flexibility shifted from software to business procedures.

It also affected the economics of software. Custom software solutions can be profitable, but they don't scale. You can only charge one client so much, and you can't sell the same thing to multiple clients. With shrinkwrap software, you can sell lots and lots of the same thing, amortizing development costs over a very large sales base. (You do have to provide support, but that scales. Just consider it a marginal cost of selling the software.)

Theoretically, where there are big winners from a change, there are going to be losers. So far, the business of software has kept expanding, so that as areas become commoditized other areas open up. This is likely to come to an end sometime, and moderately talented developers will find themselves in a real crunch, unable to work for the big boys and crowded out of the market. (This presumably happens for other fields; I suspect the demand for accountants is much smaller than it would be without QuickBooks and the like.)




Answer 20:


Outside of hardware innovations, I tend to find that there is little or nothing new under the sun. Most of the really big ideas date back to people like von Neumann and Alan Turing.

A lot of things that are labelled 'technology' these days are really just a program or library somebody wrote, or a retread of an old idea with a new metaphor, acronym, or brand name.




Answer 21:


Computer worms were researched in the early eighties at the Xerox Palo Alto Research Center.

From John Shoch and Jon Hupp's "The 'Worm' Programs - Early Experience with a Distributed Computation" (Communications of the ACM, Volume 25, Number 3, March 1982, pp. 172-180):

In The Shockwave Rider, J. Brunner developed the notion of an omnipotent "tapeworm" program running loose through a network of computers - an idea which may seem rather disturbing, but which is also quite beyond our current capabilities. The basic model, however, remains a very provocative one: a program or a computation that can move from machine to machine, harnessing resources as needed, and replicating itself when necessary.

In a similar vein, we once described a computational model based upon the classic science-fiction film, The Blob: a program that started out running in one machine, but as its appetite for computing cycles grew, it could reach out, find unused machines, and grow to encompass those resources. In the middle of the night, such a program could mobilize hundreds of machines in one building; in the morning, as users reclaimed their machines, the "blob" would have to retreat in an orderly manner, gathering up the intermediate results of its computation. Holed up in one or two machines during the day, the program could emerge again later as resources became available, again expanding the computation. (This affinity for nighttime exploration led one researcher to describe these as "vampire programs.")

Quoting Alan Kay: "The best way to predict the future is to invent it."




Answer 22:


Better user interfaces.

Today’s user interfaces still suck. And I don't mean in small ways but in large, fundamental ways. I can't help but notice that even the best programs still have interfaces that are either extremely complex or otherwise require a lot of abstract thinking, and that just don't approach the ease of conventional, non-software tools.

Granted, this is partly because software allows us to do so much more than conventional tools. That's no reason to accept the status quo, though. Additionally, most software is simply not well done.

In general, applications still lack a certain “just works” feeling and are oriented too much around what can be done rather than what should be done. One point that has been raised time and again, and that is still not solved, is saving. Applications crash, destroying hours of work. I have the habit of pressing Ctrl+S every few seconds (of course, this no longer works in web applications). Why do I have to do this? It's mind-numbingly stupid. This is clearly a task for automation. Of course, the application also has to save a diff for every modification I make (basically an infinite undo list) in case I make an error.

Solving this problem isn't even actually hard. It would just be hard to implement in every application, since there is no good API for it. Programming tools and libraries have to improve significantly before such efforts can be implemented effortlessly across all platforms and programs, for all file formats, with arbitrary backup storage and no required user interaction. But it is a necessary step before we finally start writing “good” applications instead of merely adequate ones.
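
The core of the idea itself is easy to sketch; the hard part, as said, is plumbing it through every application, platform, and file format. A toy version in Python using the standard difflib module (names are illustrative, and it only keeps history in memory; a real implementation would persist the diffs in the background):

```python
import difflib

class AutoSaveBuffer:
    """Keep the current text plus a unified diff for every edit: an unbounded undo trail."""
    def __init__(self, text=""):
        self.text = text
        self.history = []  # unified diffs, oldest first

    def edit(self, new_text):
        diff = "".join(difflib.unified_diff(
            self.text.splitlines(keepends=True),
            new_text.splitlines(keepends=True),
            fromfile="before", tofile="after"))
        self.history.append(diff)  # a real app would write this to disk automatically
        self.text = new_text

buf = AutoSaveBuffer("hello\n")
buf.edit("hello world\n")
buf.edit("hello, world!\n")
print(len(buf.history), "revisions recorded")
print(buf.history[-1])  # the diff for the last modification
```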

I believe Apple currently comes closest to the “just works” feeling in some regards. Take, for example, their newest version of iPhoto, which features face recognition that automatically groups photos by the people appearing in them. That is a classic task that the user does not want to do manually and doesn't understand why the computer doesn't do automatically. And even iPhoto is still a very long way from a good UI, since said feature still requires final confirmation by the user (for each photo!), because the face recognition engine isn't perfect.




Answer 23:


HTM systems (Hierarchical Temporal Memory).

A new approach to artificial intelligence, initiated by Jeff Hawkins through the book "On Intelligence".

The work now continues at a company called Numenta, where these ideas are put to the test through the development of "true" AI, with an invitation for the community to participate by using the system through SDKs.

It's more about building machine intelligence from the ground up, rather than trying to emulate human reasoning.




Answer 24:


The use of physics in human-computer interaction to provide an alternative, understandable metaphor. This, combined with gestures and haptics, will likely result in a replacement for the common GUI metaphor invented in the 1970s and in common use since the mid-to-late 1980s.

The computing power wasn't there in 1980 to make that possible. I believe games likely led the way here. An example can easily be seen in list scrolling on the iPod Touch/iPhone: the interaction relies on our intuition for how momentum and friction work in the real world to provide a simple way to scroll a list of items, and the usability relies on the physical gesture that causes the scroll.
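
The underlying model is just school physics. A toy sketch in Python (the frame rate, friction factor, and thresholds are made-up values): after a flick, the list keeps coasting while friction bleeds off the velocity each frame.

```python
def momentum_scroll(position, velocity, friction=0.95, dt=1 / 60, min_speed=5.0):
    """Return the scroll positions frame by frame until the list coasts to a stop."""
    frames = []
    while abs(velocity) > min_speed:
        position += velocity * dt   # integrate velocity into position
        velocity *= friction        # exponential damping stands in for friction
        frames.append(round(position, 1))
    return frames

frames = momentum_scroll(position=0.0, velocity=2000.0)  # a flick at 2000 px/s
print(len(frames), "frames to coast to a stop; first few:", frames[:5])
```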




Answer 25:


I believe unit testing, TDD, and continuous integration are significant inventions from after 1980.
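
As a reminder of how lightweight the core practice is, here is a minimal, made-up example using Python's standard unittest, in the test-first spirit: write the test, watch it fail, then make the function pass; a CI server would then run it on every commit.

```python
import unittest

def slugify(title):
    # the tiny function under test (illustrative only)
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    def test_spaces_become_dashes(self):
        self.assertEqual(slugify("Significant New Inventions"),
                         "significant-new-inventions")

    def test_already_clean_titles_pass_through(self):
        self.assertEqual(slugify("bittorrent"), "bittorrent")

if __name__ == "__main__":
    unittest.main()
```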




Answer 26:


Mobile phones.

While the first "wireless phone" patent was in 1908, and they were cooking for a long time (0G in 1945, 1G launched in Japan in 1979), modern 2G digital cell phones didn't appear until 1991. SMS didn't exist until 1993, and Internet access appeared in 1999.




Answer 27:


I started programming on January 2nd, 1980. I've tried to think of significant new inventions over my career, and I struggle to think of any. Most of what I consider significant was actually invented prior to 1980 but wasn't widely adopted or improved until after.

  1. Graphical User Interface.
  2. Fast processing.
  3. Large memory (I paid $200.00 for 16k in 1980).
  4. Small sizes - cell phones, pocket pc's, iPhones, Netbooks.
  5. Large storage capacities. (I've gone from carrying a large 90k floppy to an 8 GB USB thumb drive.)
  6. Multiple processors. (Almost all my computers have more than one now; software struggles to keep them busy.)
  7. Standard interfaces (like USB) to easily attach hardware peripherals.
  8. Multi-touch displays.
  9. Network connectivity - leading to the mid 90's internet explosion.
  10. IDEs with IntelliSense and incremental compiling.

While the hardware has improved tremendously, the software industry has struggled to keep up. We are light years ahead of 1980, but most improvements have been refinements rather than inventions. Since 1980 we have been too busy applying what the advancements let us do rather than inventing. By themselves, most of these incremental improvements are not important or powerful, but when you look back over the last 29 years they add up to something quite powerful.

We probably need to embrace the incremental improvements and steer them. I believe that truly original ideas will probably come from people with little exposure to computers, and such people are becoming harder to find.




Answer 28:


Nothing.

I think it's because people have changed their attitudes. People used to believe that if they could just find that "big idea", they would strike it rich. Today, people believe that it is the execution, not the discovery, that pays out the most. You have mantras such as "ideas are a dime a dozen" and "the second mouse gets the cheese". So people focus on exploiting existing ideas rather than coming up with new ones.




Answer 29:


Open Source community development.




Answer 30:


The iPad (released April 2010): surely such a concept is absolutely revolutionary!

[Image: Apple iPad - http://www.ubergizmo.com/photos/2010/1/apple-ipad//apple-ipad-05.JPG]

No way Alan Kay saw that coming from the 1970's!
Imagine such a "personal, portable information manipulator"...


...

Wait? What!? The Dynabook you say?

Thought out by Alan Kay as early as 1968, and described in great detail in his 1972 paper??

NOOOoooooooo....

Oh well... never mind.



Source: https://stackoverflow.com/questions/432922/significant-new-inventions-in-computing-since-1980
