Yesterday, YouTube gave video metrics to their users. If you have uploaded videos to YouTube, you can go to your video list and click “About this video” to see a history of view counts. Very simple, but a good move.
It is great to see YouTube provide this service, even if just for your own, personally uploaded videos. It validates the newly emerging industry category of “online video metrics”, of which Vquence is also a part.
Our colleagues from VisibleMeasures expressed a similar feeling in their blog entry saying: “we view anything that companies can do to help showcase the need and improve the landscape for video measurement as a plus for the entire ecosystem”. I couldn’t express it any better.
Judging by the blogging community, there is a large need for online video metrics, both for tracking your own published videos (as YouTube started doing yesterday) and for tracking videos published across the market, for market analysis and intelligence purposes.
The number of players in the field is still small and AFAIK we are the only Australians to offer these services.
U.S. spending on internet video advertising alone is expected to grow to US$4.3 billion by 2011. The need for online video publication is predicted to grow even stronger in the near future, when each and every Website will be expected to use video to communicate its message. With that, the need for video metrics will increase enormously.
Check out our new Website if you want to learn more about how Vquence measures video.
If you’re a student and keen to bring more open media technology to the Web, apply for a Google Summer of Code project with Xiph. There are also a few Annodex-style projects in the mix, which bring annotations and metadata to Ogg.
Of the list of proposed projects, my personal favorite is OggPusher – a browser plugin for transcoding video to Theora. Imagine an online service for transcoding video to Ogg Theora without having to worry about having all the libraries installed.
You also have the chance to propose your own project to the Xiph/Annodex guys – you just need to find somebody who is willing to mentor you, so hop onto the IRC channel #xiph on freenode.net and start discussing.
Incidentally, Google provides a financial reward for the successful completion of a project – but don’t let that be your only motivation. If you’re not in it with your passion, don’t do a GSoC project. This is about interacting with an open source community whose goals you can identify with. Become involved!
At Vquence, until now we have used the Thumbstacks chart library for our graphs. TSChartlib is a simple open source charting library that uses the HTML canvas and an IE hack to create its graphs.
Vquence is now getting really serious about charts and graphs, so we were looking for a more visually compelling and more flexible alternative. If you do a Google search for “online charting library”, you get a massive number of links to proprietary systems (admittedly, some of them offer the source code if you pay a premium). I will not be listing them here – go find them for yourself. However, the world of decent open source charting libraries is relatively small, so I want to share the outcome of my search.
There is the Open Flash Chart library, which provides charting functionality for Flash or Flex programmers. The charts look rather nice and have overlays on the data points, which is something I sorely missed in TSChartlib.
Then there is JFreeChart, a 100% Java chart library that makes it easy for developers to display professional quality charts in their applications. Another Java charting library is JOpenChart. Incidentally, there’s a whole swag of further Java libraries that do charts and graphing. However, we are not too keen on Java for Web technologies.
Outside our area of interest, there are also open source chart libraries in C#, but C#/.NET is not a platform we intend to support, so these were out of the question.
Our choice came down to the “Open Flash Chart” library vs “Plotkit”. Of the two, the Flash library and technology seem more mature, easier to use, and create sexier charts. Also, we can sensibly expect all Vquence users to have Flash installed, while we cannot expect the same to be true for SVG. However, I was fascinated by the flexible use of SVG and HTML Canvas and will certainly get back to it later, when I expect it to have matured a bit more.
Our choice of the Open Flash Chart was further facilitated by a Rails plugin for it. Score!
Of course, I might have totally missed some obvious or better alternatives. So, leave me a reply if you think I did!
As mentioned earlier, Vquence took part in the Australian Startup Carnival, and the winners have now been announced.
The feedback we got from the judges is encouraging. It’s great to see that Vquence is indeed providing a useful tool. But we are also aware that the service offering is not complete and needs a lot more tech development.
I’d like to address one judge’s concern that we depend on YouTube’s goodwill to keep their access open. This is not the case. For one, YouTube has just in the last few days opened up their API even further, so I don’t think there’s a risk there. More generally, closing access to content is not what the Web is about – on the contrary: Yahoo is just opening up its search platform, and Tim Berners-Lee’s Semantic Web will enable an even more open exchange of data between different sites. In any case, Vquence does not rely solely on the availability of such data interfaces. That would be dumb. Where we cannot use APIs, RSS feeds, or other data interfaces, we can always parse plain video Web pages, just like Google’s search engine parses Web pages. In short: our life would be harder without open interfaces, but not impossible.
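As a toy illustration of that page-parsing fallback, here is what scraping a view count from a plain video page might look like. Note that the HTML fragment and its class name are made up for this example, not taken from any real site:

```python
import re

# Hypothetical HTML fragment from a video page -- real markup on any given
# site will differ; this only illustrates the scraping fallback.
page = '<div class="watch-view-count">12,345 views</div>'

def extract_view_count(html):
    """Pull a view count out of raw page HTML with a regular expression."""
    match = re.search(r'([\d,]+)\s+views', html)
    if match is None:
        return None
    return int(match.group(1).replace(',', ''))

print(extract_view_count(page))  # -> 12345
```

Such scrapers are of course brittle, which is exactly why open data interfaces make life easier, but the point stands: the data is reachable either way.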
As for the position Vquence achieved in the Australian Startup Carnival: we came 5th out of 28 participants, which is great, in particular since we are currently in a transition phase towards video metrics.
Late last year at Xiph we reworked the MIME types and file extensions for Xiph content. The new set avoids using .ogg for everything: Xiph audio files get a .oga extension (audio/ogg), Xiph video files get .ogv (video/ogg), and .ogx is used for more generic multiplexed Ogg content. It is important to distinguish audio-only files from video files – the codecs inside matter less for selecting the appropriate application to open the file with.
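For a script or server that hands these files out, the new extensions need to be registered in its MIME mapping. A minimal sketch in Python follows; the application/ogg type for the generic .ogx container is my assumption here, based on the pre-existing generic Ogg type:

```python
import mimetypes

# Register the new Xiph extensions with Python's MIME registry.
# (Assumption: application/ogg for the generic multiplexed .ogx container.)
mimetypes.add_type('audio/ogg', '.oga')
mimetypes.add_type('video/ogg', '.ogv')
mimetypes.add_type('application/ogg', '.ogx')

# An application can now pick a handler by type, not by sniffing codecs.
print(mimetypes.guess_type('interview.ogv'))  # -> ('video/ogg', None)
```

This is exactly the point of the split: the top-level audio/video distinction is enough to launch the right application.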
Today I read Fabian’s blog entry – one up for Ubuntu for getting behind it: https://bugs.edge.launchpad.net/ubuntu/+bug/201291 rock!
I have spent this weekend giving my blog a work-over and extending the set of pages about myself – something I have been putting off for 5 years!
I have quite an extensive list of publications, so I have been wondering for a while how to publish them in a way that is easy to manage. I have used a nice little WordPress plugin called “List Subpages” to get the hierarchical set of pages onto the site in a nicely structured way. I’m actually missing an RSS feed of these pages, but that’s not a major problem. I’m happy with what this provides for now – it’s a good start. It will make those people happy who have been bugging me for my publications and to whom my sad reply has always been to send them an extract of my CV.
I still need to add some of the publications’ abstracts to the posts and complete the links to where you can download them. More work for future weekends.
The revolution is here and now! If you thought you’ve seen it all with video web technology, think again.
Michael Dale and Aphid (Abram Stern) have published a plugin for Mediawiki called Metavidwiki which is simply breathtaking.
It provides all of the following features:
- wiki-style timed annotations including links to other resources
- a cool navigation interface for video to annotated clips
- plain text search for keywords in the annotations
- search result display of video segments related to the keywords with inline video playback
- semantic search using speaker and other structured information
- embedding of full video or select clips out of videos into e.g. blogs
- web authoring of mashups of select clips from diverse videos
- embedding of these mashups (represented as xspf playlists)
- works with Miro through providing media RSS feeds
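To give a feel for the mashup embedding, here is roughly what generating a two-clip XSPF playlist looks like. The clip URLs are invented, and Metavidwiki’s actual playlists will carry more metadata than this sketch:

```python
import xml.etree.ElementTree as ET

# A minimal XSPF playlist for a two-clip mashup. The clip URLs are made up;
# real Metavidwiki playlists are richer than this bare-bones sketch.
playlist = ET.Element('playlist', version='1', xmlns='http://xspf.org/ns/0/')
tracklist = ET.SubElement(playlist, 'trackList')
for url in ['http://example.org/clip1.ogv', 'http://example.org/clip2.ogv']:
    track = ET.SubElement(tracklist, 'track')
    ET.SubElement(track, 'location').text = url

print(ET.tostring(playlist, encoding='unicode'))
```

Because XSPF is just XML, any player or site that understands the format can pick up such a mashup and play the clips in sequence.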
Try it out and be amazed! It should work in any browser – provide feedback to Michael if you discover any issues.
All of Metavidwiki is built using open standards, open APIs, and open source software. This gives us a taste of how far we can take open media technology and how much of a difference it will make to Web video in comparison to today’s mostly proprietary and non-interoperable Web video applications.
The open source software that Metavidwiki uses is very diverse. It builds on Wikipedia’s Mediawiki, the Xiph Ogg Theora and Vorbis codecs, a standard LAMP stack and AJAX, and the Annodex Apache server extension mod_annodex, and it is capable of providing the annotations as CMML, ROE, or RSS. On the client side, it uses the capabilities of your specific Web browser: should you run the latest Firefox with Ogg Theora/Vorbis support compiled in, it will make use of this special capability. Should you have a VLC browser plugin installed, it will use that to decode Ogg Theora/Vorbis. The fallback is the Java Cortado player for Ogg Theora/Vorbis.
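That client-side fallback chain can be sketched as a simple selection function; the capability flags here are simplified stand-ins for the real browser detection that Metavidwiki performs:

```python
# A sketch of the playback fallback chain described above; the boolean
# capability flags are simplified stand-ins for real browser detection.
def choose_player(native_theora=False, vlc_plugin=False, java=True):
    """Return the first playback method the browser supports."""
    if native_theora:
        return 'native Ogg Theora/Vorbis playback'
    if vlc_plugin:
        return 'VLC browser plugin'
    if java:
        return 'Cortado Java applet'
    return 'no playback available'

print(choose_player(vlc_plugin=True))  # -> 'VLC browser plugin'
```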
Now just imagine for a minute the type of applications we will be able to build with open video APIs and interchangeable video annotation formats, as well as direct addressing of temporal and spatial fragments of media across sites. Finally, video and audio will be able to become a key part of the picture of a semantic Web that Tim Berners-Lee is painting – a picture of open and machine-readable information about any and all content on the Web. We certainly live in exciting times!
Today, for the millionth time I had to listen to a statement that goes along the following lines: “CMML technology is not ideal for media annotations, because the metadata is embedded with the object rather than separate”.
Once and for all, let me shout it out: THIS IS UTTER BULLSHIT!
I am so sick of hearing this statement from people who criticise CMML from a position of complete lack of understanding. So, let me put it straight.
While it is true that CMML has the potential to be multiplexed as a form of timed text inside a media file, the true nature of CMML is that it is versatile and by no means restricted to this representation.
In fact, the specification document for CMML quite clearly specifies an XML document. In that respect, CMML is more like RSS than a timed text format.
Further, I’ll let you in on a little secret: CMML can be stored in databases. Yes!! In fact, CMMLWiki, one of the first online media applications implemented using Annodex, uses a MySQL database to store CMML data. The format in which the data can be extracted depends on your needs: you can extract the content of individual fields, you can put it into an interchangeable XML file (called CMML), or you can multiplex it with the media data into an Annodex file.
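As a small sketch of that round-tripping, assume annotation rows coming out of a database table and serialise them as CMML-style clip markup. The field names and clip contents here are my own invention, not CMMLWiki’s actual schema:

```python
import xml.etree.ElementTree as ET

# Rows as they might come out of a database table of annotations.
# Field names and contents are invented for illustration.
rows = [
    {'id': 'intro', 'start': 'npt:0', 'desc': 'Opening remarks'},
    {'id': 'demo', 'start': 'npt:65', 'desc': 'Live demo of the system'},
]

# Serialise the rows as CMML-style XML with one clip element per row.
cmml = ET.Element('cmml')
for row in rows:
    clip = ET.SubElement(cmml, 'clip', id=row['id'], start=row['start'])
    ET.SubElement(clip, 'desc').text = row['desc']

print(ET.tostring(cmml, encoding='unicode'))
```

The same rows could just as easily be rendered as individual field values or multiplexed into an Annodex stream; the database, not the multiplexed file, is the canonical store.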
The flexibility of CMML is its beauty! It was carefully designed to transform easily between these different representations. It is powerful precisely because it can appear in all these different forms. By no means is this “not ideal”.
Vquence was today presented on the “Australian Startup Carnival” site – go, check it out.
There are 28 participants in the startup carnival, and each one of them is being introduced through an interview conducted electronically. The questions for this interview were rather varied and detailed: they covered technical and system backgrounds as well as our use of open source software.
All the questions you have always wanted to ask about Vquence, and a few more.
UPDATE: The Startup Carnival has announced the prizes and they are amazing – first prize being an exhibition package at CeBIT. Good luck to us all!!