October 21, 2011

Was P2P live streaming an academic bubble?

Or is the academic community just disconnected from reality?

In brief, the motivation for peer-to-peer live streaming is that servers cannot deliver live video at large scale. I know, it sounds crazy in a YouTube world. In a peer-to-peer system, clients help the poor video provider broadcast its video, without much delay or quality degradation. To make it more fun, no server at all is allowed.

Believe it or not, Google finds more than 50,000 scientific documents dealing with this issue or one of its variants. Yet today only a handful of systems based on a peer-to-peer architecture are actually used, mostly to illegally broadcast sports events. As far as I know, these systems (released before the crazy scientific boom on the topic) do not implement one thousandth of the proposals described in those 50,000 articles. It seems that the small teams of developers behind these programs haven't found the time to download, read, or understand any of them.

Was this abundant scientific production useless? Probably not. First, scientists made some practical achievements. For example, the P2P-Next project has released under LGPL tons of code implementing state-of-the-art solutions, including the multiparty swift protocol. A protocol is also going through the standardization process at the IETF. Consequently, the next generation of peer-to-peer programs should be able to undercut the TV industry as peer-to-peer did the music industry. Second, these studies have produced interesting scientific results beyond P2P streaming itself, for example on the robustness of randomized processes for broadcasting data flows in networks. It reminds me of the golden era of ad hoc networks (2000-2005), when scientists had a lot of fun playing with graphs and information, even if only the military has found these protocols useful. We do understand networks better now!
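
As an aside on those "randomized processes": here is a minimal, hypothetical sketch (not code from P2P-Next, swift, or any published system) of the push-gossip idea behind most peer-to-peer live-streaming designs. Each peer that already holds the latest video chunk forwards it to a few peers picked at random; the chunk reaches the whole swarm in roughly log(n) rounds, which is why the source does not need a big server.

import math
import random

def push_gossip_rounds(num_peers, fanout=3, seed=0):
    """Simulate randomized push gossip: each round, every peer that
    already holds the chunk pushes it to `fanout` peers chosen uniformly
    at random. Returns the number of rounds until everyone has it."""
    rng = random.Random(seed)
    has_chunk = [False] * num_peers
    has_chunk[0] = True  # the video source injects the chunk
    rounds = 0
    while not all(has_chunk):
        senders = [p for p in range(num_peers) if has_chunk[p]]
        for p in senders:
            for target in rng.sample(range(num_peers), fanout):
                has_chunk[target] = True
        rounds += 1
    return rounds

if __name__ == "__main__":
    n = 10000
    print(push_gossip_rounds(n), "rounds for", n, "peers;",
          "log2(n) is about", round(math.log2(n), 1))

With 10,000 peers this finishes in about a dozen rounds; the only point is that dissemination time grows logarithmically with the audience, not linearly with it.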

But did it deserve 50,000 articles? Of course not. Under-the-spotlight start-ups (Joost) and publicly funded pioneers (BBC) switched back to centralized architectures four years ago, even though they had a decisive technological lead. It looks like there is no bankable application out there. Maybe it was all for the beauty of science, but whoever funded this research can only hope that randomized processes in networks will eventually find a way to improve the human condition. Or maybe it was just a good way to keep people busy?

So, yes, P2P live streaming was a bubble. Here are three quick observations, which would deserve a more careful analysis:
  • An academic bubble starts like a financial bubble. In a financial bubble, no company can take the risk of not investing in an area if all its competitors do. In an academic bubble, neither funding agencies nor program committees can challenge an abrupt growth in the number of papers in a given area. Therefore scientists obtain quick funding, publications, and citations, which fuel the bubble. However, the academic bubble differs from the financial one because there is no critical damage when the number of papers abruptly drops. The bubble does not hurt when it bursts, so nobody tries to understand what went wrong. In other words, this bubbling trend can only grow, and the next bubble (content-centric networking?) has a good chance of being even bigger.
  • Tracking the next bubble is attractive. Scientists are rewarded for their impact on the community. In this context, the authors of the seminal works in a bubbling area, for example Chord (nearly 9,000 citations although distributed hash tables have found little use) or SplitStream (more than 1,000 citations for a system relying on a video encoding that only academics have ever used), are rock stars. Anticipating the herd behavior of scientists has become a key academic skill.
  • Scientists are still unable to focus their energy on their real clients, who in this case were the aforementioned small teams of hackers. This is yet another motivation for revamping the way scientific results are delivered in computer science. Giving free access to papers, releasing the code used in the papers, participating in non-academic events, or finding echoes in other communities are among the solutions. Not only to be meaningful, but also to prevent bubbles.
Just an idea: once the bubble is officially there, would it be possible to officially forbid the bullshit motivation paragraphs in papers? I wish authors would admit that they just want to have fun developing a new model for a useless bubbling scenario.

5 comments:

  1. Love it. Before ad hoc networks, there was multicast. Also, as my networking professor said around 1995, "ATM to the lightbulb!"

  2. I'm doing a research masters on distributed systems, and this blog post reflects exactly what I've been thinking all along about P2P x-y-z's: "Does anyone use this at all?"

  3. Very good point. But I'm not sure that the bubbling does not hurt... It consumes a lot of resources for poor results, and it may prevent real work from being done in non-bubbling areas.

  4. P2P-Next, one of the projects you've mentioned, was started in 2008 (right after the announcement you've used), and the transition from research to deployment takes time. In 2020 we will probably have >100Mbit in every household (well, not every, but most ...) and the reasons for not using P2P will no longer be valid.

    Thus, I don't know how you come to the conclusion "So, yes, P2P live streaming was a bubble". Maybe it is simply not the right time for a large-scale deployment yet, but it will be in 10 years.

  5. Interesting point! Just like the famous motto from Box: "essentially, all models are wrong, but some are useful". It may be true for any so-called academic bubble.

