March 24, 2015

Ten years as an academic scientist: preamble of my HdR

Here is the preamble of my HdR, which I will defend on April 7th, 2015, in Rennes.

I defended my PhD thesis ten years ago. At that time, my research domains included peer-to-peer systems, mobile ad-hoc networks and large-scale virtual worlds. Today, these topics hardly get any attention from the academic world. Although most papers published in the early 2000s advocated that centralized systems would never scale, today's most popular services, used by billions of people, rely on centralized architectures powered by data-centers. In the meantime, open virtual worlds based on 3D graphical representations (e.g. Second Life) failed to attract users, while social networks based on static, text-based web pages (e.g. Twitter and Facebook) have exploded. I do not want to blame myself for having worked in areas that did not prove to be as critical as they were supposed to be. Instead, I would like to emphasize that I work in an ever-changing area, which is highly sensitive to the development of new technologies (e.g. big data middleware), new hardware (e.g. smartphones), and new social trends (e.g. user-generated content).

I envy the scientists who are able to precisely describe a multi-year research plan, and to stick to it. I am not one of them. But I am not ashamed to admit that my research activity is mostly driven by short-term intuitions and opportunities, and that the process of academic funding directly impacts my work. Indeed, despite all of the above, I have built a body of research that I retrospectively find consistent. And more importantly, I have been relatively successful in advising PhD students and managing post-docs, all of whom have become better scientists to some extent.

In short, during the past ten years I have developed a solid expertise in (i) theoretical aspects of optimization algorithms, (ii) multimedia streaming, and (iii) Internet architecture. I have applied this triple expertise to a specific set of applications: massive multimedia interactive services. I provide in this manuscript an overview of the activities that have been developed under my lead since 2006. It is a subset of selected studies, which are in my opinion the most representative of my core activity.

I hope you will have as much fun reading this document as I had writing it.

November 7, 2014

A Dataset for Cloud Live Rate-Adaptive Video

There is an audience for non-professional video "broadcasters", like gamers, online course teachers and witnesses of public events. To meet this demand, live streaming service providers such as ustream, livestream, twitch or dailymotion have to find a solution for the delivery of thousands of good-quality live streams to millions of viewers who consume video on a wide range of devices (from smartphones to HDTVs). Yet, in current live streaming services, the video is encoded on the computer of the broadcaster and streamed to the data-center of the service provider, which in most cases simply forwards the video it gets from the broadcaster. The problem is that many viewers cannot properly watch the streams due to mismatches between video encoding parameters (i.e. video rate and resolution) and the features of viewers' connections and devices (i.e. connection bandwidth and device display).

To address this issue, adaptive streaming combined with cloud computing could be the answer. Whereas adaptive streaming makes it possible to manage the diversity of end-viewer requirements by encoding several video representations at different rates and resolutions, cloud computing provides the CPU resources to live-transcode all these alternate representations from the broadcaster-prepared raw video.

It is well known that the QoE of an end-viewer watching a stream depends on the encoded video and the parameter values used in the transcoding. But, in this new cloud scenario, we also need to consider the transcoding CPU requirements. In the "cloud video" era, the selection of video encoding parameters should take into account not only the client (for the QoE), but also the data-center (for the allocated CPU). To set the video transcoding parameters, the cloud video service provider should know the relations among transcoding parameters, CPU resources and end-viewer QoE, ideally for any kind of video encoded on the broadcaster side.

We would like to announce the publication of a dataset containing CPU and QoE measurements corresponding to an extensive battery of transcoding operations, with the purpose of contributing to research in this topic. Most of the credit for this work (and so for this post) has to be given to Ramon Aparicio-Pardo.

To elaborate the dataset, we used four types of video content, four resolutions (from 224p up to 1080p) and bit-rate values ranging from 100 kbps up to 3000 kbps. Initially, we encoded each of the four video streams into 78 different combinations of rates and resolutions, emulating the encoding operations at the broadcaster side. Then, we transcoded each of these broadcaster-prepared videos into all the representations with lower resolutions and bit-rate values than the original one. The overall number of these operations, representing the cloud transcoding, was 12168. For each of these operations, we measured the CPU cycles required to generate the transcoded representation, and we estimated the end-viewers' satisfaction using the Peak Signal-to-Noise Ratio (PSNR) score. We depict a basic sketch of these operations for one specific case where the broadcaster encoded its raw video at 720p resolution at 2.25 Mbps and we transcode it into a 360p video at 1.6 Mbps.
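To give an idea of how such a battery of transcodings is enumerated, here is a minimal Python sketch. The resolution ladder and the uniform 100-kbps rate grid below are assumptions for illustration only (the actual dataset uses 78 broadcaster-side combinations per video), and the reading of "lower" as lower-or-equal resolution with strictly lower rate is also our assumption.

```python
from itertools import product

# Assumed encoding grid, for illustration only: the actual dataset uses
# 78 broadcaster-side (resolution, rate) combinations per video.
RESOLUTIONS = [224, 360, 720, 1080]       # assumed resolution ladder
RATES_KBPS = list(range(100, 3001, 100))  # assumed 100..3000 kbps rate grid

# Broadcaster-side encodings: every (resolution, rate) pair in the grid.
sources = list(product(RESOLUTIONS, RATES_KBPS))

def targets(source):
    # Cloud-side transcodings: every representation with a lower-or-equal
    # resolution and a strictly lower rate than the source.
    res, rate = source
    return [(r, b) for (r, b) in sources if r <= res and b < rate]

jobs = [(src, dst) for src in sources for dst in targets(src)]
print(len(sources), len(jobs))
```

With this toy grid, the enumeration yields 120 broadcaster-side encodings and 4350 transcoding jobs; the dataset's actual grid is what produces the 12168 operations mentioned above.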

We give below an appetizer of how these CPU cycles and satisfaction decibels vary with the transcoding parameters. The figures show some examples of the kind of results that you will find in the dataset, here for a broadcaster-prepared video of type "movie," at 1080p resolution and encoded at 2750 kbps. If you wonder what the rest of the figures look like, 558 curves and their corresponding 12168 measurements of cycles of hard CPU work and decibels of viewer satisfaction are waiting for you in

October 13, 2014

Toward a new public higher education system

My previous post was quite harsh about the way the French government addresses the MOOC phenomenon. I would like now to be more constructive (and also to demonstrate that I am not only a moaner). So, basically, what would I do if I were the French Minister of Higher Education? In short:
  • I'd shut down FUN. To be competitive, such a project requires an investment an order of magnitude greater than the planned funding. When the objective is to attract tens of thousands of students, there is no room for small players. 
  • I'd stop funding through calls for proposals. These calls reward people who know how to write proposals and who, in the best case, release results years later. Moreover, and most importantly, these calls do not give university managers a sense of responsibility. French higher education institutions have to learn how to promote their best professors and make them "MOOC-able", instead of begging the government as if "making a MOOC" were a right.
  • I'd massively invest in French-friendly start-ups. The focus should be on three main domains where the position of France is weak today: a European-scale portal, tools for scalable learning, and online student evaluation. The investment could be led by a structure such as BPI France.
In the following, I give my personal analysis of the context. I first decompose the traditional functions of a higher education institution, and analyze the challenges.
  • Define the topic of the courses. In France, institutions design curricula, which are then checked by academic accrediting agencies (like ABET in the US, or CTI and AERES in France). Shortly put, curricula target young people (named students) and aim at developing their employability. Several courses form a consistent curriculum. As for MOOCs, students are mainly working professionals, with a large diversity of motivations. The course is a unit, which should be independent. The topics are focused. It is thus quite different, but not fundamentally challenging.
  • Select the students. This is the main asset of the Grandes Ecoles. However, MOOCs are (expectedly) scalable, so you can teach an unlimited number of students. The question is no longer to filter the best students before the course. The aim now is to have the right audience for the course: as many students as possible, with a high motivation for the topic and the right background. As said in my previous post, portals like Coursera are far better at this than any French higher education institution. 
  • Build the course. Every MOOC creator agrees that building a scalable online course is quite different from building a traditional course for a small, on-site population. MOOCs require new categories of workers. But the role of the teacher is still prominent. So far, the teachers have worked in traditional higher education institutions.
  • Deliver the course. A building full of classrooms is useless. What is needed is a great, scalable, full-featured learning management tool. Moreover, you need a competitive team of developers to implement online exercises that add value and improve the student experience. Here again, I don't think that any traditional French higher education institution can compete in providing such a tool. Only a team of excellent, super-committed software developers can do it.
  • Assist students during their learning experience. The challenge of MOOCs is to provide the same kind of assistance as a traditional course with one professor and a dozen students, even though the number of students is on the order of thousands. The power of community is the lever.
  • Evaluate the students. When students are spread all over the world, it is impossible to organize exams the usual way. Companies like ProctorU have developed offers where either exam rooms are available anywhere on the planet, or specific, secured online tools allow anybody to be monitored as if she were on-site.
In the traditional model, all these functions are fulfilled by higher education institutions. In the new model related to MOOCs, I foresee that traditional institutions will be outperformed by start-ups on a subset of functions: creating a portal to attract students, developing a scalable learning platform, and evaluating students worldwide. These functions require strong skills in software development, in empowering a community of open-source developers, in promotional activities and marketing, in worldwide staff management, in agile development, in reliable online infrastructure, and in website design. My claim is that neither universities nor public structures have any of the above skills.

Instead, I suggest giving a special mission to BPI France to make sure that funding goes to the most brilliant European start-ups related to education, in particular on the aforementioned functions (attract students worldwide, develop scalable learning platforms, evaluate students). By investing in European SMEs, the emergence of a champion is possible. And if the public sector is one of the main investors, it may also ensure some of the "public missions" (e.g. almost free access to knowledge). Examples of such brilliant European start-ups include OpenClassrooms, Iversity and FutureLearn.

On their side, the traditional French higher education institutions have to evolve. I like the analogy between MOOCs and scientific books. Not all professors write books. Not all institutions ask their teaching staff to write books. Excellent professors (experts in some area, extremely brilliant as teachers) attract editors because the books they may write can become a success. It is thus up to the institutions to decide whether they should promote their excellent professors so that they may be spotted by editors. Being "MOOC-able" is now a criterion for hiring professors at EPFL, according to its director. This is the kind of shift French institutions also have to embrace.

October 8, 2014

The misconceptions behind the French FUN-MOOC portal

Bloggers frequently start their controversial posts with a disclaimer about how their personal opinion is not necessarily endorsed by their employers. In the case of this post, it goes one step further: I am afraid that my opinion is the opposite of my employers'.

I would like to talk about MOOCs, you know, this innovation that may disrupt higher education. My colleagues and I have been quite active in this area for the past couple of years, with a MOOC opened in Spring 2013, and contributions to two successful MOOCs.

One year ago, the French government decided to be pushy in this area, and thus to build from scratch a website named France Université Numérique (sorry, no English translation of the Wikipedia page yet), which aims at gathering MOOCs in French from French higher education institutions into a free online portal. I summarize below some of the profound, symptomatic, critical misconceptions about innovation and the Internet that this project demonstrates:
  • This public (state-funded) project emerged while some private French start-ups (e.g. OpenClassrooms and Unow) were just kick-starting in the MOOC area. For the young entrepreneurs who were trying to build a reputation and to convince universities and Grandes Ecoles to join the MOOC movement, the arrival of such a competitor changed the game. FUN is a de facto incumbent, since higher education entities are also funded by the government. FUN is a public non-profit action, so it is completely free, without the need for any business model. These start-ups have managed to find their place in a new ecosystem nonetheless, but, in my opinion, FUN-MOOC did not help. When the time comes to cry about the lack of a "French Coursera" (like most cry today about the lack of a French Google), we shall not forget that the government actually prevented the rise of such a possible French success story by entering the market like a bull in a china shop. 
  • I am always sad when I realize that our political leaders still consider, in 2014, that it is trivial to build a popular 24/7 full-featured Internet portal, and that it is trivial to manage a sophisticated professional online tool such as a massive, social, online course platform. It seems that the numerous failures of public French websites have been quickly forgotten. FUN-MOOC was, and still is, a complete disaster. Typically, the portal was shut down for two entire days in September 2014 for software upgrades. As can be expected from projects that are managed by people who know very little about the Internet and software, numerous shocking mistakes have been made, e.g. considering INRIA and a so-called SSII (a French IT services company) as a good team for the development of software, and forking from the main open-source online course platform project although the developer community was vibrant and active.
  • Branding does not look like an important matter for those who initiated this project. The acronym is FUN (I guess there are a couple of references when you type FUN on Google). The full name is French-oriented, which is good in France and in some places in Africa, but bad in any other part of the world. It is hard to know whether it is a consequence, but we have almost no Swiss or Québécois students registered in our courses in French. More generally, if one wants to build a popular website, branding is key. It looks like the government made the same mistake as the creator of lescopainsdavant. When you compete against Coursera, Udacity and EdX, a name like FUN is not a gift.
I was very doubtful about this initiative, and I said so publicly. I told some friends that the government would try to shut down FUN-MOOC in less than two years, once the MOOC bubble deflated and our political leaders realized that FUN-MOOC is expensive and not necessarily good for the economic sector as a whole. Well, I was wrong: it took them only nine months to realize it. Unfortunately, we don't have our happy ending yet. Indeed, the government is asking whether some higher education entities would be happy to maintain FUN-MOOC on its behalf. The very sad point about it is that a consortium of various French entities (including Institut Mines-Telecom) is a candidate. Let me continue the misconception list:
  • When it comes to innovation, a consortium of bureaucratic, state-funded education entities is not the right vehicle. Exploring new business models, breaking the rules and embracing disruption are not in the DNA of public French universities, are they?
  • A consortium of universities is no better than a government at managing a 24/7 full-featured online portal or a sophisticated professional online tool. Universities struggle to have decent websites and learning software. I don't see any reason for success in such a project, whatever the funding.
  • The business model is, well, complicated, but with high probability it will consist of complaining that the government is not giving enough money for FUN-MOOC to work properly.
  • The management is typical of the crazy French higher education system, with consortiums of consortiums of entities that do not like each other, a probable series of rebranding operations just to make sure everybody gets lost, weird processes where nobody really knows who is in charge of what (especially on critical points like the promotion of the portal and the development of new features), and, most of all, the promise of hours-long meetings.
It seems to me that all the main mistakes that a higher education ministry and a set of public universities can make are being made. Hopefully, some will eventually succeed in stopping this crazy, bureaucratic, counter-productive process. I failed.

October 16, 2013

What encoding parameters for video representations in adaptive streaming?

Dynamic Adaptive Streaming over HTTP (DASH) is a technology that was implemented and deployed while the scientific literature was nonexistent. Simply put, the server offers several representations of the same video; clients can choose the representation that best fits their capabilities. Since 2008, many researchers have deciphered the global behavior of client-based adaptive mechanisms. However, one key piece of the theoretical cake is still missing: what is the optimal set of video representations the server should offer?

As far as we know, there are no commonly accepted rules on how to choose the encoding parameters of each representation (resolution and rate). Providers typically use somewhat arbitrary rules of thumb or follow manufacturers' recommendations (e.g. Apple and Microsoft), which take into account neither the nature of the video streams nor the characteristics of the user base. These parameters can however have a large influence on both user QoE and delivery cost.

With fellow researchers from EPFL (Laura and Pascal), we have recently investigated this topic from an optimization standpoint. The objective is to maximize the average user satisfaction. We formulated an optimization problem with the following inputs, which any content provider hopefully knows:
  • for each video in the catalog, the expected QoE of users for any rate-resolution pair. This can be easily obtained from a rate-distortion curve computed on a sample of the video at every resolution.
  • for each video in the catalog, the characteristics of the population of viewers. I mean here the client device (tablet, TV, smartphone, ...) and the available bandwidth of the network connection (xDSL, fiber, 3G, ...). This requires an a priori knowledge of the viewer population, but we guess it can be obtained from past statistics. 
  • the minimum ratio of viewers that must be served, i.e. the users who actually get a video, even at a relatively bad quality.
  • for the delivery part, the overall bandwidth budget that can be provisioned. Typically, we consider that the cost of the CDN should be bounded, and so the overall used bandwidth is bounded too.
  • finally, the total number of representations that we want to encode. The idea here is to limit the storage and encoding costs, and to avoid huge, hard-to-administer Manifest files.
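To make the formulation concrete, here is a toy sketch of the representation-selection problem under a budget on the number of representations. The candidate ladder, the viewer classes and the logarithmic QoE model are all made-up assumptions (a real instance would use per-video rate-distortion curves, and would also enforce the bandwidth budget and minimum-served-ratio constraints listed above), and the brute-force search only works for tiny instances; it is not the method we actually used, it just shows the structure of the inputs and the objective.

```python
from itertools import combinations
from math import log

# Assumed candidate representations: (resolution, rate in kbps).
CANDIDATES = [(360, 500), (360, 1000), (720, 1500), (720, 2500), (1080, 3500)]

# Assumed viewer classes: (max resolution, bandwidth in kbps, audience share).
VIEWERS = [(360, 1200, 0.3), (720, 2000, 0.5), (1080, 4000, 0.2)]

def qoe(rep):
    # Placeholder QoE model that simply rewards higher rates; a real instance
    # would use the per-video rate-distortion curves mentioned above.
    return log(rep[1])

def avg_satisfaction(reps):
    # Each viewer class picks the best representation it can display and fetch.
    total = 0.0
    for max_res, bandwidth, share in VIEWERS:
        feasible = [r for r in reps if r[0] <= max_res and r[1] <= bandwidth]
        if feasible:
            total += share * max(qoe(r) for r in feasible)
    return total

K = 2  # budget on the total number of representations
best = max(combinations(CANDIDATES, K), key=avg_satisfaction)
print(best)  # the K-representation set maximizing average satisfaction
```

Note how the best set is not simply the K highest-rate representations: a representation nobody can fetch contributes nothing to the average satisfaction.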
We solved the problem on a set of synthetic configurations (the above inputs). Our goal was twofold: (i) measure the "performance" of the recommended sets of representations, and (ii) provide guidelines for content providers.

About the former goal, our observation is that the recommended sets are not that bad in terms of average QoE, but, for a given expected quality, the number of representations in these recommended sets is almost twice the number of representations in the optimal solutions. In other words, the average QoE is obtained at the price of more video representations, which means more encoders, more storage, more delivery bandwidth in the CDN infrastructure, and more complexity in the management. We also showed that these recommended sets perform poorly for more specific configurations. For instance, a content provider specialized in live e-sport videos, or a content provider targeting mobile phones, should absolutely not follow the recommendations.

We also derive from our analysis a series of guidelines. Some of them may be obvious, but it is never bad to recall obvious things, especially when nobody seems to follow them.
  1. How many representations per video? The repartition of representations among videos needs to be content-aware. Put emphasis on the videos that are the most complex to encode (e.g. sports).
  2. For a given video, how many representations per resolution? It mainly follows the distribution of devices in user population. Put a slight emphasis on highest resolutions.
  3. How to decide bit-rates for representations in a given resolution? The higher the resolution, the wider the range of rates should be. Put emphasis on lower rates.
  4. How to save CDN bandwidth? Reduce the range of rates for representations in a resolution. Reduce the number of representations at high resolution.
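As a toy illustration of guidelines 2 and 3, the sketch below builds a bitrate ladder where higher resolutions get both more representations and a wider rate range, and geometric spacing naturally places rates closer together at the low end. All numbers are made up for illustration only; they are not our recommended values.

```python
def ladder(resolutions, reps_per_res, rate_ranges_kbps):
    """Build a per-resolution bitrate ladder with geometric rate spacing,
    which places representations closer together at lower rates."""
    reps = []
    for res, n, (lo, hi) in zip(resolutions, reps_per_res, rate_ranges_kbps):
        ratio = (hi / lo) ** (1 / (n - 1)) if n > 1 else 1.0
        reps += [(res, round(lo * ratio ** i)) for i in range(n)]
    return reps

# Made-up example: higher resolutions get more representations (guideline 2)
# and a wider rate range (guideline 3).
print(ladder([360, 720, 1080], [2, 3, 4],
             [(300, 800), (800, 2500), (1500, 6000)]))
```

Shrinking the rate ranges or dropping high-resolution entries from such a ladder is exactly the lever suggested by guideline 4 to save CDN bandwidth.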
These first results are just preliminary tests. We have plenty of new topics to explore. Stay tuned!

July 21, 2013

We forced students to enroll in a MOOC... and they liked it!

We made a MOOC, and it was anything but easy behind the scenes. This MOOC was integrated into the regular curriculum of Telecom Bretagne students, so we kind of forced students to follow a MOOC. These students were neither volunteers nor MOOC enthusiasts.

We just got feedback (i) from the traditional survey, which is performed by our administration every semester, and (ii) from a specific survey we conducted. Here is a short analysis.

Students enjoyed the videos!
Students were unanimously positive about the MOOC, although they were unanimously negative about other distance-learning experiments, for example watching videos captured during a regular lecture (even with several cameras), or lectures through videoconference. As far as I know, it is the first time that Telecom Bretagne students have been positive about a distance-learning experiment.

To be honest, we did not expect feedback at this level of enthusiasm, especially with regard to the troubles we experienced during the course preparation. For instance, we received suggestions to replace all regular lectures by MOOC videos. Some students enrolled in another (traditional) course about cellular networks did not attend that course because they preferred attending the MOOC instead. Less passionate but more useful: students were satisfied with the pace and the clarity of the videos. They admitted they had worked more than expected overall, but they did not especially complain about it. And students who are not French natives said that their level in French was sufficient to watch the videos.

Of course, these results have to be validated by another experiment, but they confirm the high level of acceptance for KhanAcademy-like short videos.

Quizzes matter, peer reviewing does not
A MOOC is expected to be something more complex than just a bunch of videos. What we did in this MOOC was nothing spectacular: some quizzes after each video, a forum, some assignments, and a peer-review system, which allowed students to review the assignments of other students. At the end of the day, how useful are these beyond-video learning tools?

From our survey, quizzes are what matter the most. The main purpose of these quizzes is to offer students a way to check whether they were attentive during a video. In short, if you cannot answer the quiz, then you should probably watch the video again. Intuitively, quizzes are not magical learning tools. But think twice about it and recall when you were a student. If you were sure the teacher would ask you a question in, say, five minutes, you would certainly be very focused on the teacher during these five minutes. Now think about a teacher asking you a question every five minutes! Today's quizzes are very simple, but this positive feedback may encourage us to enhance them.

On the contrary, peer reviewing has not been appreciated. Students did not find it useful to review the assignments of other students, and they found it even less useful to receive reviews of their own work from other, anonymous students. I am disappointed because I had high hopes for this learning tool, which is the most "connectivist" tool we implemented. Well, we have to work further on it!

It is not easy to take notes while watching videos
When we interviewed students (very informally), a recurring worry was note-taking. How do you take notes while the video is playing? A video is not a lecture. It is focused and it does not include any downtime. Almost every sentence matters and requires a note. Moreover, you cannot only listen; you must watch, at least a bit.

Students suffered from being unable to follow the videos and take notes simultaneously. Some of them paused the video regularly. Others played the video twice: a first time to get the global picture, and a second time to take selected notes. From our survey, students playing videos more than twice are rare (less than 10%).

This feedback emphasizes that students have to acquire new methods in order to follow such video-based courses. Somehow, students who have followed our MOOC got some specific skills, which will be useful if they have to follow other MOOCs in their life (which is highly probable). Should we include a course about how to follow a MOOC in the curriculum?

Multiple experiences are possible
There is not one unique way to follow a MOOC. We got confirmation of this, in case you had doubts about it.

We booked some classrooms with computers and headphones in the regular schedule. Some students told us they appreciated this. They attended these "free" hours because it was for them a guarantee of maintaining a regular learning pace. Others, of course, worked in bursts, watching several hours of videos in one night when they got assignments. Overall, I like this freedom, which calls for unconstrained MOOC schedules.

Finally, a group of students told us they used to watch the videos together. I can only imagine beers, chips, a TV screen... and MOOC videos! (Debates should be lively for the quizzes.) This way of experiencing MOOC videos is great, since it allows students to discuss the learning material. When we give a lecture in an amphitheater, we usually ban in-class chat because we assume most of it is not related to the lecture. But chat among students can be useful. This experience also opens questions about next-generation campuses: dedicated tiny classrooms equipped with a TV screen are an option to consider.

May 27, 2013

I made a MOOC and I survived!

Xavier Lagrange, Alexander Pelov and I made a MOOC introducing Cellular Networks!

It is supposed to be a 20-hour course for students with a minimal background in networks. It attracted around 350 students, including 35 students from my institution for whom this course is part of the curriculum.

I do not discuss here our motivations to create a MOOC and the way students have experienced it. I focus on the teacher's standpoint when making this MOOC.

We decided to make our own MOOC from scratch without using external products (except YouTube to host the videos). In other words, we did not use third-party companies like Coursera and Udacity, which host content, advertise it, provide a hotline for technical troubles, and so on.

Time Spent
As a very rough estimate, we spent 240 hours on this MOOC, including:
  • 20 hours preparing the pedagogical material. This part is actually enjoyable for teachers. Transforming classic 3-hour lectures into to-the-point 7-minute videos (+ quizzes) is actually a nice challenge. There is room to do more, though: we did not change our exercises much. Moreover, the only collaborative tool we experimented with is peer reviewing for homework. In other words, the transition to a c-MOOC would require more time.
  • x hours interacting with students. Since our MOOC was not that crowded, x was close to epsilon, but I guess there should be some formula linking teacher interaction time and the number of students.
  • 30 hours installing and testing the MOOC platform. We chose OpenMOOC because it was the only available, viable, open-source platform at that time. It is an overall good basis, but it is relatively hard to install for people who are not familiar with server administration. Moreover, the statistics modules are very incomplete. But, still, OpenMOOC is OK: it provides the basic functionalities and the teacher interface is friendly.
  • 180 hours generating the course videos, including:
    • Warm-up: it is anything but easy to write on a tablet while watching a camera, to master all the recording elements, to feel comfortable, to find the right tone and the right pace. For each teacher, the first video-recording attempts were disastrous.
    • Recording: we made a lot of errors, for example speaking for twenty minutes with the microphones off. Even when everything runs perfectly, speaking for such videos is totally different from lecturing. Overall, 2 minutes of recorded video ended up as 1 minute of video that could actually be used.
    • Producing: we discovered how to use studio software. With today's tools, it took us on average 10 minutes to process each recorded minute, so 20 minutes per minute of video finally put online. And we had around 90 videos of 6 minutes on average. 
  • 5 hours advertising. We were not affiliated with a well-known platform, so we needed to attract people (despite the aridness of the topic). Clearly, we did not do enough.
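As a sanity check, the video-production figures quoted above are arithmetically consistent: 90 videos of 6 minutes on average, half of the recorded footage usable, and about 10 minutes of editing per recorded minute already add up to the 180 hours listed for video generation.

```python
# Back-of-the-envelope check, using only the figures quoted in the list above.
n_videos, avg_video_minutes = 90, 6
online_minutes = n_videos * avg_video_minutes   # minutes of video put online
recorded_minutes = online_minutes * 2           # half the recorded footage was usable
editing_minutes = recorded_minutes * 10         # ~10 minutes of editing per recorded minute
production_hours = editing_minutes / 60
print(production_hours)
```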

The teacher should first decide how to cut a full course into units, each unit being four to ten chunks, each chunk being a 7-minute video. We opted for the format that was popularized by Khan Academy. This format is now widely used across platforms: the background is (almost) empty, the teacher speaks over it, we sometimes see his/her face, and, most importantly, he/she writes on the slide while speaking.

Our process was to first create the target slides, i.e. what we want to have at the end of the video. Then, we kept only what is actually hard to draw in real time, for example a hexagonal cell pattern. This is the background slide. The goal of the video is to start with the background slide, to write on it, and to finish with something that is close to the target slide. During the recording, the target slides were displayed behind the camera so that the teacher does not forget anything (and looks at the camera).

We decided not to use a prompter because we wanted to keep things as natural as possible. We did not write the script in advance, but Xavier and Alex master the topic so well that they did not need it. Note that it is also possible to pause during the recording so that the teacher can take time to think about the next sentences. It is also possible to repeat something in a better way when the previous sentences were not totally satisfactory. These pauses and repeated sentences can be cut afterwards.

To generate the first videos, we used software, cams and microphones that we found on our shelves. We were able to generate some videos, but the overall quality was borderline. Then, we got some extra funding and were able to buy professional equipment and build our own studio. The quality is far better. Our studio includes a tablet, a powerful Mac with enough disk space (we needed around 1 terabyte for this MOOC alone), some wireless microphones, and a semi-professional camera.

Stuff I Would Have Done Differently
I put here miscellaneous thoughts:
  • We would have chosen a sexier title. In our case, it would have been appropriate to include a buzzword like LTE, LTE-Advanced or femtocells in the title. We identified three ways to attract a large population of students: 
    • The MOOC is affiliated with a top-ranked university, which knows how to advertise, or with a highly-visible platform like Coursera. These websites attract millions of visitors, so any course can enroll thousands of students. It is possible that these enrolled people are less committed to complete the course though;
    • The MOOC is about a very trendy topic, say quantum computing, software-defined networks or any other buzzword. It has to be remembered that the majority of "MOOC students" are professionals who want to keep in touch with new topics they have heard about;  
    • You create buzz around your MOOC. We spent only 5 hours advertising, and we are not professionals. Press and web buzz campaigns are one way. It is also possible to convince fellows from other universities to make their students enroll in your MOOC.
  • We would have found a better video format. Let's be honest: without a dedicated team, it is indecently long and tedious to create Khan Academy-like videos. We went overboard on the videos. Compare this video and this other video. It took us 20 minutes to produce each minute of the former video, while it took us only 4 minutes per minute of the latter. Is it worthwhile? Based on our experience, it is probably possible to divide the overall time spent on video by at least 3.
Overall, it was a great experience. We learned a lot about the potentials of such online courses and we had a lot of fun playing with videos. We developed a lot of nice ideas for the next MOOC, and we significantly improved the process of video recording and editing.

But it was also a huge investment. Xavier told me that making this MOOC was as demanding as writing a book. I often compare books with MOOCs when I have to explain our motivations for making MOOCs. Both are knowledge, both are supposed to be made by experts, both target a wide population… it seems that both require very committed authors.