Markdown vs reStructuredText for teaching materials

Featured image: Brandi Redd | Unsplash (photo)

Back in the summer of 2017, I wrote an article explaining why we used Sphinx and reStructuredText for producing teaching materials instead of a wiki. In addition to recommending Sphinx as the solution to use, it was general praise for generating static HTML files from Markdown or reStructuredText.

This summer I converted the teaching materials from reStructuredText to Markdown. Unfortunately, the automated conversion using Pandoc didn't quite produce the result I wanted, so I ended up cooking up my own Python script that converted the specific dialect of reStructuredText used for writing the contents of lab.miletic.net and fixed a myriad of inconsistencies in writing style that had accumulated over the years.
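The script itself is specific to those materials, but a minimal sketch of the kind of transformation involved might look like the following (this is a simplified, hypothetical illustration, not the actual script; it handles only `code-block` directives and assumes three-space body indentation):

```python
import re

def code_block_to_fence(text):
    """Turn rST ".. code-block:: LANG" directives into fenced Markdown blocks."""
    lines = text.splitlines()
    out = []
    i = 0
    while i < len(lines):
        m = re.match(r"^\.\. code-block:: (\S+)", lines[i])
        if m:
            lang = m.group(1)
            i += 1
            body = []
            # consume the indented directive body (blank lines included)
            while i < len(lines) and (not lines[i].strip() or lines[i].startswith("   ")):
                body.append(lines[i][3:] if lines[i].startswith("   ") else lines[i])
                i += 1
            # trim leading/trailing blank lines from the body
            while body and not body[0].strip():
                body.pop(0)
            while body and not body[-1].strip():
                body.pop()
            out.append("```" + lang)
            out.extend(body)
            out.append("```")
        else:
            out.append(lines[i])
            i += 1
    return "\n".join(out)
```

The real conversion handled many more constructs (admonitions, roles, cross-references), which is exactly why Pandoc's generic output wasn't enough.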

reStructuredText as the obvious choice for software documentation

I personally preferred to write reStructuredText, which I found to be more powerful and better standardized than Markdown (I have heard the same is true of AsciiDoc, though I haven't personally used it). When we forked rDock to start RxDock, reStructuredText and Sphinx were the obvious choice for its documentation. A good argument for why a software developer would prefer reStructuredText over Markdown for software documentation is given in a very fine article written by Victor Zverovich. He mentions two main advantages, the first one being:

reStructuredText provides standard extension mechanisms called directives and roles which make all the difference. For example, you can use the math role to write a mathematical equation (…) and it will be rendered nicely both in HTML using a Javascript library such as MathJax and in PDF via LaTeX or directly. With Markdown you’ll probably have to use MathJax and HTML to PDF conversion which is suboptimal or something like Pandoc to convert to another format first.

(For what it's worth, this has since been addressed by the PyMdown Extensions Arithmatex extension, which is easy to enable when using MkDocs with the Material theme.)
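For a MkDocs project using the Material theme, enabling Arithmatex comes down to a few lines in mkdocs.yml; a sketch (the MathJax CDN URL below is one possible choice, not the only one):

```yaml
# mkdocs.yml (fragment)
markdown_extensions:
  - pymdownx.arithmatex:
      generic: true

extra_javascript:
  - https://unpkg.com/mathjax@3/es5/tex-mml-chtml.js
```

With that in place, `$...$` and `$$...$$` math in Markdown sources gets rendered by MathJax in the browser.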

The second advantage mentioned by Zverovich is very useful for software documentation, though it would be merely nice to have elsewhere:

In addition to this, Sphinx provides a set of roles and directives for different language constructs, for example, :py:class: for a Python class or :cpp:enum: for a C++ enum. This is very important because it adds semantics to otherwise purely presentational markup (…)

Markdown as the obvious choice elsewhere

Despite recommending reStructuredText for software documentation, Victor opens his blog post with:

In fact, I’m writing this blog post in Markdown.

It's the obvious choice since GitHub Pages offers Markdown to HTML conversion, so you can focus on writing the contents; the same feature isn't available for reStructuredText and AsciiDoc. Unfortunately for rST, GitLab supports Markdown and AsciiDoc, but not reStructuredText (it was requested five years ago). (However, GitLab Pages supports almost anything you can imagine thanks to GitLab CI/CD, including Sphinx.)
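For illustration, a minimal .gitlab-ci.yml that builds Sphinx documentation and publishes it via GitLab Pages could look roughly like this (a sketch; the `docs` source directory and the Python image are assumptions):

```yaml
# .gitlab-ci.yml (sketch): GitLab Pages serves whatever lands in "public"
pages:
  image: python:3
  script:
    - pip install sphinx
    - sphinx-build -b html docs public
  artifacts:
    paths:
      - public
  only:
    - master
```

The job name `pages` and the `public` artifacts path are the two conventions GitLab Pages keys on; everything else is an ordinary CI job.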

And it's a similar story elsewhere. Reddit? Markdown. Slack and Mattermost? Both Markdown. Visual Studio Code supports Markdown without any extensions (but there are 795 of them available if you feel that something you require is missing, compared to 21 for reStructuredText), and it's a very popular choice among my colleagues and students. Also, there is nothing like HackMD for reStructuredText or AsciiDoc that I know of.

Obviously, many of these tools weren't around when we switched to Sphinx back in 2014. However, now that they are here to stay, Markdown is starting to look like the better choice of the two.

Moving from reStructuredText to Markdown for teaching materials

In my particular case, the straw that broke the camel's back and made me decide to convert the teaching materials from reStructuredText to Markdown was a student contribution of ZeroMQ exercises for the Distributed systems course (not included yet). I asked the student to write reStructuredText, but got the materials in Markdown, and I can understand why that is. Let's say the student wanted to do things properly in reStructuredText and Sphinx. The procedure is this:

1. Git clone the repository.
2. Open the folder in your favorite editor, say VS Code, and notice it doesn't highlight rST out of the box. No problem, there is an extension, right?
3. Install the reStructuredText extension (homepage), then close all the NotImplemented exception notes that appear when opening the project.
4. Open a file just to get a feeling for how rST should look. Try to preview it. Unknown directive type "sectionauthor". Never mind, it's just one unsupported command.
5. The code isn't highlighted. Oh well, it's not a showstopper.
6. Well, there are more errors in the preview. Never mind, the compiled output is the real preview. Let's compile every time something is changed.
7. (…)
8. Send the changes by e-mail or git add, git commit, and git push.

Compare that with the Markdown workflow:

1. Git clone the repository.
2. Open the folder in VS Code and start writing.
3. Send the changes by e-mail or git add, git commit, and git push.

To be fair, the VS Code Markdown preview does not render admonitions, but that's how it goes with language extensions. Still, it's much easier to get started with Markdown and MkDocs than with reStructuredText and Sphinx if you are new to documentation writing, which is the case for most students.
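For reference, with Material for MkDocs and the Admonition extension enabled, an admonition in Markdown looks like this (the text is a made-up example):

```markdown
!!! note
    This exercise requires a working ZeroMQ installation.
```

This is the Markdown counterpart of the rST `.. note::` directive, which is what the built-in preview doesn't know about.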

There are a number of other things I like:

• The Material theme for MkDocs is awesome. It's a set of extensions in addition to a good-looking theme.
• Integrated search is designed as “find as you type” and provides a much better user experience.
• Much faster building. It takes 11 seconds to build the group website with MkDocs, while it took 37 seconds to build the older version of the same website with Sphinx.
• Built-in GitHub Pages deployment functionality. It's possible to do the same with Sphinx, but it's much nicer to have it built in and maintained.
• Automatic building of the sitemap. (There’s an extension for Sphinx that does the same.)
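On the last point: a sitemap is just a small XML file listing the site's URLs. MkDocs generates it for you, but as an illustration, a minimal sketch of building one (with hypothetical URLs) could be:

```python
from xml.sax.saxutils import escape

def build_sitemap(base_url, pages):
    """Return a sitemap.xml string for the given page paths (hypothetical example)."""
    urls = "\n".join(
        "  <url><loc>{}{}</loc></url>".format(escape(base_url), escape(p))
        for p in pages
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        + urls + "\n</urlset>"
    )
```

Having this produced automatically on every build is one less thing to remember.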

Overall, I am very satisfied with the results and I'm looking forward to using Markdown for writing teaching materials in the future. I'll continue to write RxDock documentation in reStructuredText, since fancy cross-references and numbered equation blocks are very easy to do in reStructuredText. In addition, there is the official way to produce PDF output via LaTeX, which is quite important to have for proper scientific software documentation. Also, the potential contributors in this case are somewhat experienced with documentation tools and can usually find their way around reStructuredText and Sphinx, so it's not that much of an issue.

Mirroring free and open source software matters

Featured image: Patrick Tomasso | Unsplash (photo)

Post theme song: Mirror mirror by Blind Guardian

A mirror is a local copy of a website that's used to speed up access for users residing in the area geographically close to it and to reduce the load on the original website. Content distribution networks (CDNs), which are a newer concept and perhaps more familiar to younger readers, serve the same purpose, but do it in a way that's transparent to the user. When using a mirror, the user sees explicitly which mirror is being used because the domain differs from the original website's; with a CDN, the domain remains the same and the DNS resolution (which is invisible to the user) selects a different server.

Free and open source software has been distributed via (FTP) mirrors, usually residing at universities, basically since its inception. The story of Linux mentions a directory on ftp.funet.fi (FUNET is the Finnish University and Research Network) where Linus Torvalds uploaded the sources, which was soon afterwards mirrored by Ted Ts'o on MIT's FTP server. The GNU Project's history contains an analogous process of making local copies of software for faster downloading, which was especially important in the times of pre-broadband Internet, and it continues today. Many Linux distributions, including this author's favorites Debian and Fedora, use mirroring (see here and here) to be more easily available to users in various parts of the world. If you look carefully at those lists, you can observe that universities and institutes host a significant number of mirrors, which is both a historical legacy and an important role of these research institutions today: researchers and students in many areas depend on free and open source software for their work, and it's much easier (and faster!) if that software is downloadable locally.

Furthermore, my personal experience leads me to believe that hosting a mirror is a great way for a university to reach potential students in computer science. For example, I heard of TU Vienna thanks to ftp.tuwien.ac.at and, had I been willing to do a PhD outside of Croatia at the time, I would certainly have looked into the programs they offered. As another example, Stanford has some very interesting courses/programs at the Center for Computer Research in Music and Acoustics (CCRMA). How do I know that? They went even a bit further than mirroring: they offered software packages for Fedora at Planet CCRMA. I bet I wasn't the only Fedora user who played/worked with their software packages and in the process got interested in checking out what else they were doing aside from packaging those RPMs.

That being said, we wanted to do both at the University of Rijeka: serve the software to the local community and reach potential students/collaborators. Back in late 2013 we started by setting up a mirror for Eclipse; it first appeared at inf2.uniri.hr/mirrors and later moved to mirrors.uniri.hr, where it still resides. LibreOffice was also added early in the process, and Cygwin quite a bit later. Finally, we started mirroring CentOS's official and alternative architectures as the second mirror in Croatia (but the first one in Rijeka!), the first Croatian one being hosted by Plus Hosting in Zagreb.

The university's mirror server already syncs a number of other projects on a regular basis, and we will make sure we are added to their mirror lists in the coming months. As mentioned, this is both an important historical legacy role of a university and a way to serve the local community, and a university should be glad to do it. In our case, it certainly is.

Why use reStructuredText and Sphinx static site generator for maintaining teaching materials

Featured image: Les Anderson | Unsplash (photo)

Yesterday I was asked by Edvin Močibob, a friend and a former student teaching assistant of mine, the following question:

You seem to be using Sphinx for your teaching materials, right? As far as I can see, it doesn’t have an online WYSIWYG editor. I would be interested in comparison of your solution with e.g. MediaWiki.

While the advantages and disadvantages of static site generators when compared to content management systems have been written about and discussed already, I will outline our reasons for the choice of Sphinx below. Many of the points have probably already been presented elsewhere.

Starting with MoinMoin

Teaching materials for the courses some of my colleagues and I used to teach at InfUniRi and RiTeh, including the laboratory exercises for the Computer Networks 2 course developed during early 2012, were initially put online using MoinMoin. I personally liked MoinMoin because it used flat text files and required no database, and also because it was Python-based and I happen to know Python better than PHP.

During the summer of 2014, the decision was made to replace MoinMoin with something better, because version 1.9 was lacking features compared to MediaWiki and was also evolving slowly. Most of the development effort was put into MoinMoin version 2.0, which, quite unfortunately, still isn't released as of 2017. My colleagues and I especially cared about mobile device support (we wanted responsive design), as it was requested by students quite often and, by that time, every other relevant actor on the internet had it.

The search for alternatives begins

DokuWiki was a nice alternative and it offered responsive design, but I wasn’t particularly impressed by it and was also slightly worried it might go the way of MoinMoin (as of 2017, this does not seem to be the case). It also used a custom markup/syntax, while I would have much preferred something Markdown/reStructuredText-based.

We really wanted to go open with the teaching materials and release them under a Creative Commons license. Legally, that can be done with any wiki or similar software. Ideally, however, a user should not be tied to your running instance of the materials to contribute improvements and should not be required to invest a lot of effort to set up a personal instance where changes can be made.

MediaWiki was another option. Thanks to Wikipedia, MediaWiki's markup is widely understood, and a WYSIWYG editor was being created at the time.

In an unrelated sequence of events, I set up a MediaWiki instance in BioSFLab (where I also participated in research projects for almost two years) and can say that running such an instance presents a number of challenges:

When migrating a MediaWiki instance from one server to another, you have to dump/restore the database and adjust the config files (if you're lucky, it won't be required to convert Apache configuration directives to Nginx ones or vice versa). None of this is especially complicated, but it's extra work compared to flat-file wikis and static websites.

Finally, my favorite MediaWiki theme (skin in its terminology) is Vector, so my potential wiki with teaching materials would look exactly like Wikipedia. While nice and trendy, it is not very original to look like Wikipedia.

Going static, going reStructured

Therefore, we opted to use Sphinx and reStructuredText, as it was and still is a more powerful format than Markdown. We specifically cared about the built-in admonitions, which made it easier for us to convert the existing contents (Python socket module lecture is a decent example). The advantages of Sphinx were and still are the following:

There are a number of issues that affected us:

• the time to deployment after a change: it varies depending on the change, but it's on the order of tens of seconds in the worst case;
• the need to automate deployment upon git push (note that this does not increase the attack surface, since git uses SSH or HTTPS for authentication and transfer);
• the learning curve for adding content: MediaWiki's WYSIWYG editor beats using git and reStructuredText in terms of simplicity.

Conclusion

A rule of thumb here would be:

• if many people inside an organization are going to edit content a lot and the content is more like notes than proper documentation, then MediaWiki (or DokuWiki) is the choice,
• if the content has an obvious hierarchy of parts, chapters, sections, etc. and/or it evolves the way documentation changes with the software it documents, then Sphinx (or any of the Markdown-based generators, e.g. HotDoc or MkDocs) will do a better job.

Enabling HTTP/2, HTTPS, and going HTTPS-only on inf2

Featured image: John Moore | Unsplash (photo)

Inf2 is a web server at the University of Rijeka Department of Informatics, hosting Sphinx-produced static HTML course materials (mirrored elsewhere), some big files, a WordPress instance, and an internal instance of Moodle.

HTTPS had been enabled on inf2 for a long time, using a self-signed certificate. However, with Let's Encrypt coming into public beta, we decided to join the movement and switch to a proper certificate, although HTTPS remained optional at that point. Almost a year and a half later, we also enabled HTTP/2 for the users who access the site using HTTPS. This was very straightforward.
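On nginx (which serves inf2, as discussed below), enabling HTTP/2 for HTTPS users amounts to one extra token on the TLS listen directive; roughly:

```nginx
server {
    listen 443 ssl http2;   # "http2" is the only addition
    server_name inf2.uniri.hr;
    # ... certificate and site configuration unchanged ...
}
```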

Mozilla has a long-term plan to deprecate non-secure HTTP. The likes of NCBI (and the rest of the US Federal Government), Wired, and StackOverflow have already moved to HTTPS-only. We decided to do the same.

Configuring nginx to redirect to https:// is very easy, but configuring particular web applications at the same time can be tricky. Let’s go through them one by one.
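The nginx side of the redirect is a small catch-all server block; a sketch:

```nginx
server {
    listen 80;
    server_name inf2.uniri.hr;
    # permanent redirect preserving the requested path and query string
    return 301 https://$host$request_uri;
}
```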

Sphinx-produced static content does not hardcode local URLs, and the resources loaded from CDNs in Sphinx Bootstrap Theme are already loaded via HTTPS. No changes were needed.

WordPress requires you to set the https:// URL in Admin, Settings/General. If you forget to do so before you go HTTPS-only, you can still use the config file to adjust the URL.

Moodle requires you to set $CFG->wwwroot in the config file to the https:// URL of your website.
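Both adjustments are one-liners in the respective config files; a sketch (the URLs and the Moodle path below are illustrative):

```php
// wp-config.php: set the site URLs explicitly if they weren't changed in Admin
define('WP_HOME', 'https://inf2.uniri.hr');
define('WP_SITEURL', 'https://inf2.uniri.hr');

// Moodle config.php
$CFG->wwwroot = 'https://inf2.uniri.hr/moodle';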

And that's it! Since there is a dedicated IP address used just for the inf2.uniri.hr domain, we can afford not to require SNI support from the clients (I'm sure that both of our Android 2.3 users are happy about it).

How to watch Russia Today RT Live On Air without Flash

Featured image: Gianpaolo La Paglia | Unsplash (photo)

What is RT?

RT (formerly Russia Today) is an international TV network operated by the Russian government. Right now, it fits among a number of alternative media sources which are on the rise given the falling trust in the mainstream media in the Western world (as an example, here is 2015 data for the USA).

RT’s content has been criticized from multiple angles. Regardless, I find it to be a valuable source of news; among other things, I particularly like that they frequently feature commentators like Ron Paul (for nearly 8 years already!) and John McAfee.

RT requires Flash Player plugin for watching their live audio/video streams

(Update 25th of May 2017: RT Live does not require Flash anymore! Just open it in any HTML5-compliant browser and the video stream will start playing.)

However, RT Live (the so-called On Air) requires you to use the Flash Player plugin and does not offer HTML5 video, which is suboptimal at best. RT uses HTTP Live Streaming (HLS) in JW Player for five channels (News, USA, UK, Arabic, and Documentary) and YouTube for one (Spanish). Knowing what a JW Player HLS configuration looks like allows us to use some hackery to dig up the stream URLs.

The On Air page embeds the player from another page, and the URLs are in the variable streams.hls, which is set in static/libs/octoshape/js/streams/news.js. Digging through the same file for each of the other channels will uncover their URLs as well (I have listed all of the URLs below). A simpler solution would be to have the Flash Player plugin installed (yuck, proprietary software!) and use Video DownloadHelper, which can uncover the URLs accessed by the Flash Player plugin while the video is playing.
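The hackery is manageable because an HLS master playlist is plain text: each variant is an #EXT-X-STREAM-INF line followed by its URL, which is where the index2500/index1600/… names below come from. A simplified Python sketch of extracting the variants (it ignores quoted attribute values, and the playlist in the test is made up):

```python
def parse_master_playlist(text):
    """Return (bandwidth, url) pairs from an HLS master playlist, highest first.

    Simplified: does not handle commas inside quoted attribute values.
    """
    variants = []
    bandwidth = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("#EXT-X-STREAM-INF:"):
            # attributes are comma-separated KEY=VALUE pairs
            for attr in line.split(":", 1)[1].split(","):
                if attr.startswith("BANDWIDTH="):
                    bandwidth = int(attr.split("=", 1)[1])
        elif line and not line.startswith("#"):
            # a non-comment line right after the attributes is the variant URL
            variants.append((bandwidth, line))
            bandwidth = None
    return sorted(variants, reverse=True)
```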

To play the streams from the URLs listed, you can use an open source media player such as mpv, MPlayer, or VLC. I have tested and confirmed that all three work with the URLs listed below. If you don't like those three, there are plenty more options.

RT News

RT News, as well as RT America and RT UK, is served using a CDN from Level 3 Communications. Unfortunately, we are limited to receiving the stream over unencrypted HTTP.

HD: http://rt-eng-live.hls.adaptive.level3.net/rt/eng/index2500.m3u8
Hi: http://rt-eng-live.hls.adaptive.level3.net/rt/eng/index1600.m3u8
Medium: http://rt-eng-live.hls.adaptive.level3.net/rt/eng/index800.m3u8
Low: http://rt-eng-live.hls.adaptive.level3.net/rt/eng/index400.m3u8
Audio: http://rt-eng-live.hls.adaptive.level3.net/rt/eng/indexaudio.m3u8

RT America

HD: http://rt-usa-live.hls.adaptive.level3.net/rt/usa/index2500.m3u8
Hi: http://rt-usa-live.hls.adaptive.level3.net/rt/usa/index1600.m3u8
Medium: http://rt-usa-live.hls.adaptive.level3.net/rt/usa/index800.m3u8
Low: http://rt-usa-live.hls.adaptive.level3.net/rt/usa/index400.m3u8
Audio: http://rt-usa-live.hls.adaptive.level3.net/rt/usa/indexaudio.m3u8

RT UK

HD: http://rt-uk-live.hls.adaptive.level3.net/rt/uk/index2500.m3u8
Hi: http://rt-uk-live.hls.adaptive.level3.net/rt/uk/index1600.m3u8
Medium: http://rt-uk-live.hls.adaptive.level3.net/rt/uk/index800.m3u8
Low: http://rt-uk-live.hls.adaptive.level3.net/rt/uk/index400.m3u8
Audio: http://rt-uk-live.hls.adaptive.level3.net/rt/uk/indexaudio.m3u8

RT Arabic

RT Arabic and RT Documentary use a different CDN (though operated by Level 3 Communications, just like the CDN for the first three channels). This CDN offers HTTPS in addition to the unencrypted HTTP.

HD: https://rt-ara-live-hls.secure.footprint.net/rt/ara/index2500.m3u8
Hi: https://rt-ara-live-hls.secure.footprint.net/rt/ara/index1600.m3u8
Medium: https://rt-ara-live-hls.secure.footprint.net/rt/ara/index800.m3u8
Low: https://rt-ara-live-hls.secure.footprint.net/rt/ara/index400.m3u8
Audio: https://rt-ara-live-hls.secure.footprint.net/rt/ara/indexaudio.m3u8

RT Documentary

HD: https://rt-doc-live-hls.secure.footprint.net/rt/doc/index2500.m3u8
Hi: https://rt-doc-live-hls.secure.footprint.net/rt/doc/index1600.m3u8
Medium: https://rt-doc-live-hls.secure.footprint.net/rt/doc/index800.m3u8
Low: https://rt-doc-live-hls.secure.footprint.net/rt/doc/index400.m3u8
Audio: https://rt-doc-live-hls.secure.footprint.net/rt/doc/indexaudio.m3u8

RT Spanish

RT Spanish is broadcast live via YouTube, so it does not require the Flash Player plugin in the first place. If you want to watch it outside the browser, the players mentioned above should be able to do that.

And that’s it. No Flash Player plugin required! Hopefully, RT will start offering HTML5 video at some point and allow watching live content without requiring Flash Player plugin. You can try bugging them about it, as I did over Twitter.

(Update: added the note regarding RT Spanish.)

Guest post: celebrating Graphics and Compute Freedom Day

Featured image: Wesley Caribe | Unsplash (photo)

Hobbyists, activists, geeks, designers, engineers, etc. have always tinkered with technologies for their own purposes (in early personal computing, for example). And social activists have long advocated the power of giving tools to people. An open hardware movement driven by these restless innovators is creating ingenious versions of all sorts of technologies and freely sharing the know-how through the Internet and, more recently, through social media. Open source software, and more recently hardware, is also encroaching upon centers of manufacturing and can empower serious business opportunities and projects.

The free software movement is cited as both an inspiration and a model for open hardware. Free software practices have transformed our culture by making it easier for people to become involved in producing things from magazines to music, movies to games, communities to services. With advances in digital fabrication making it easier to manipulate materials, some now anticipate an analogous opening up of manufacturing to mass participation.

It is important to keep sharp open hardware's more transformational edges, on agendas such as dismantling intellectual property and releasing investment for alternative business models. Only through a mix of craft, politics, and the support of social movements will open hardware fully realise its potential to democratise technology.

There are numerous organizations and initiatives voiced and supported by the Open Source Hardware Association, and a vast, thriving community of supporters and technological enthusiasts working to advance this core value. The Open Source Hardware Association aims to be the voice of the open hardware community, ensuring that technological knowledge is accessible to everyone and encouraging the collaborative development of technology that serves education, environmental sustainability, and human welfare.

Technology and culture ought to respect user freedom. A year ago, AMD made a giant leap towards a fully open source Linux graphics and compute driver stack. While still offering proprietary hardware running proprietary firmware, having the driver and the libraries as open source opens up the potential for modification and performance optimization. In addition, it gives other GPU manufacturers, including NVIDIA and Intel, a standard to aim for. Finally, it gives hope that there will be further openness in the future.

This is why we celebrate Graphics and Compute Freedom Day, GCFD. We want to take one day every year to remember all the open standards, open source software, and open hardware that have made it into the mainstream in the field of computer graphics and GPU computing. It has been exactly one year since AMD unveiled GPUOpen on 15th December 2015; let's celebrate GCFD and hope that this year is just the start of many more successful years of graphics and compute freedom.

GCFD will be hosting a livestream starting at 14:30 Central European Time. Join us.

What did the first Croatian president say about the threats to the freedom of open source software?

Featured image: Justin Luebke | Unsplash (photo)

The famous speech given by the first Croatian president, Dr. Franjo Tuđman, at Zagreb Airport on 23 November 1996 is very well structured. It is therefore quite easy to perform a search & replace on the text of the speech that changes its content while preserving its form. After a moderate amount of tinkering, the result looks quite usable to me:

We will not allow the remnants of the proprietary Unixes, nor Microsoft, to bring back the state we had found in computing before we established the freedom of free and open source software. We will not allow them to call all of that into question. We will not allow it to those remnants of the proprietary Unixes, nor to those technological laughingstocks, headless muddleheads who do not see what free software is really about today, and what the world with its assorted GitHub projects is about… We will not allow it to those who ally themselves even with the multicolored devil against the freedom of open source software, not only the multicolored one, but also the red and the black-and-white devils… We will not allow it to those who associate with all the opponents of free software, who not only associate with them but offer themselves to them, and not only offer themselves but sell themselves for Secure Boot, DRM, and software patents, who themselves boast of receiving technology from all the laboratories of the world, and who ally with everyone from the extremists of closedness to assorted fake hipsters, pseudo-open deceivers who today preach to us grand ideas about user rights and open standards.

Yes! We created our freedom for user rights and for open standards, but above all for the rights of the majority of users of free software. But, of course, with that freedom of software and open code we will also secure those rights and open standards for the users of non-free software. But we will not allow outsiders to settle matters for us, that is, to impose solutions on us. Free software will not be under the control of any single company. Unix spent enough time under Berkeley and under AT&T, under Sun and under IBM, and under SGI. The community around Linux has won its freedom, its independence, its right to decide its own fate.

The original text can be found on Wikisource.

The academic and the free software community ideals

Featured image: davide ragusa | Unsplash (photo)

Today I vaguely remembered one occasion in 2006 or 2007 when some guy from academia doing something with Java and Unicode posted, on some mailing list related to free and open source software, about a tool he was developing. What made it interesting was that the tool was open source, yet he had filed a patent on the algorithm.

A few searches later, boom, there it is

Google is a powerful tool. The original thread from March 2007 on (now defunct) linux-utf8 mailing list can be found on The Mail Archive. The software website is still up. The patent is out there as well.

Back in 2007 I was in my 3rd year of undergraduate study of mathematics (major) and computer science (minor), used to run Linux workshops in my spare time, and was aiming to do a PhD in mathematics. I disliked the usage and development of proprietary research software, which was quite common in much of the computer science research I saw back then. Unlike those researchers, I believed that academia and the free software community agreed that knowledge should be free as in freedom, and I wanted to be a part of such a community.

Academic ideals

As a student, you are continuously told that academia is for idealists. People who put freedom before money. People who care about knowledge in and of itself and not how to sell it. And along with these ideas about academia, you are passed one more very important idea: the authority of academia. Whatever the issue, academia (not science, bear in mind) will provide a solution. Teaching? Academia knows how to do it best. Research? You bet. Sure, some professor here and another professor there might disagree on whatever topic, and one of them might be wrong. Regardless, academia will resolve whatever conflict arises and produce the right answer. Nothing else but academia.

The idea, in essence, is that people outside of academia are just outsiders and their work is not relevant because it is not sanctioned by academics. They do not get the right to decide on relevant research. Their criticism of the work of someone from academia does not matter.

Free software community ideals

Unlike academia, the free software community is based on decentralization, a lack of imposed hierarchy, individual creativity, and strong opposition to the idea of requiring a sanction from some arbitrary central authority. If you disagree, you are free to create software your way and invite others to do the same. There is no "officially right" or "officially wrong" way.

Patent pending open source code

"Some guy from academia" in the case mentioned above was Robert D. Cameron from Simon Fraser University, asking the free software community to look at his code:

u8u16-0.9 is available as open source software under an OSL 3.0 license at http://u8u16.costar.sfu.ca/

Rich Felker was enthusiastic at first, but quickly saw that the software in question was patent pending:

On second thought, I will not offer any further advice on this. The website refers to “patent-pending technology”. Software patents are fundamentally wrong and unless you withdraw this nonsense you are an enemy of Free Software, of programmers, and users in general, and deserve to be ostracized by the community. Even if you intend to license the patents obtained freely for use in Free Software, it’s still wrong to obtain them because it furthers a precedent that software patents are valid, particularly stupid patents like “applying vectorization in the obvious way to existing problem X”.

There were also doubts presented regarding the relevance of this research at all, along with suggestions for better methods. While interesting, they are outside the scope of this blog post.

A patent is a state-granted monopoly designed to stimulate research, yet it is frequently used to stop competition and delay access to new knowledge. Both the Mises Institute and the Electronic Frontier Foundation have written many articles on patents, which I highly recommend for more information. In addition, as an excellent overview of the issues with the patent system, I can recommend the movie Patent Absurdity: How software patents broke the system.

So, there was a guy from the idealistic academia who, from my perspective, seemed to take the wrong stance. And there was a guy outside of the idealistic academia who was seemingly taking the right stance. It made absolutely no sense at first that academia was working against freedom while an outsider was standing for freedom. Then it finally hit me: academia and the free software community do not hold the same ideals and do not pursue the same goals. And this was also the moment I chose my side: the free software community first and academia second.

However, academics tend to be very creative in proving that they care about the freedom of knowledge. Section 9 of the paper (the only part of the paper I read) goes:

A Simon Fraser University spin-off company, International Characters, has been formed to commercialize the results of the ongoing parallel bit stream research using an open source model. Several patent applications have been filed on various aspects of parallel bit stream technology and inductive doubling architecture.

Whoa, open source and patents. What’s going on here?

However, any issued patents are being dedicated to free use in research, teaching, experimentation and open-source software. This includes commercial use of open-source software provided that the software is actually publically available. However, commercial licenses are required for proprietary software applications as well as combinations of hardware and software.

Were this about copyright licenses rather than patents, I would completely agree with this approach. “If you are open sourcing your stuff, you are free to use my open source stuff. If you are not open source, you are required to get a different license from me.” That is how copyleft licenses work.

The problem is, as Rich says above, every filing of a patent reinforces the validity of the patent system itself. The patent in question is just a normal patent, and this is precisely the problem. Furthermore:

From an industry perspective, the growth of software patents and open-source software are both undeniable phenomena. However, these industry trends are often seen to be in conflict, even though both are based in principle on disclosure and publication of technology details.

Unlike patents, free and open source software is based on the principle of free, unrestricted usage, modification, and distribution. These industry trends are indeed seen to be in conflict, and that is the right way to see them.

It is hoped that the patentleft model advanced by this commercialization effort will be seen as at least one constructive approach to resolving the conflict. A fine result would ultimately be legislative reform that publication of open source software is a form of knowledge dissemination that is considered fair use of any patented technology.

While it would certainly be nice if open source were protected from patent lawsuits, this tries to shift the focus away from the real issue, which is the patent itself and the restrictions it imposes.

Opening the patents

The first possible solution is not to patent at all.

The second possible solution is to license the patent differently. Instead of being picky about the applications of the patent to decide whether royalties ought to be paid, which is the classical academic approach and is also used above, one can simply license it royalty-free to everyone. This way, one prevents the innovations from being patented and licensed in the classical way. This is what Tesla Motors does.

The third possible solution is to use a copyleft-style patent license, which allows royalty-free use of the knowledge provided that you license your developments under the same terms. This approach uses the existing patent system in reverse, just like copyleft licenses use the copyright system in reverse. It can be seen as an evolution of what the Open Invention Network and BiOS already do.

This approach still relies on giving validity to the patent system, but unlike the classical academic approach it also forces anyone to either go copyleft with their derivative patents or not use your technology. Effectively, this approach uses the patent system to expand the technology commons accessible to everyone, which is an interesting reversal of its originally intended usage.

I am still not buying the “new open source-friendly Microsoft” bullshit

Featured image: Georgi Petrov | Unsplash (photo)

This week Microsoft released Computational Network Toolkit (CNTK) on GitHub, after open sourcing Edge’s JavaScript engine last month and a whole bunch of projects before that.

While open sourcing a bunch of their software is a very nice move from Microsoft, I am still not convinced that they have changed to the core. I am sure there are parts of the company that believe free and open source is the way to go, but it still looks like a change only on the periphery.

Really, none of the projects they have open sourced so far are the core of their business. The latest version of Windows is no more friendly to alternative operating systems than any version before it, and one could argue it is even less friendly due to additional Secure Boot restrictions. Using Office still basically requires you to use Microsoft’s formats and, in turn, accept their vendor lock-in.

Put simply, I think all the projects Microsoft has opened up so far are a nice start, but they still have a long way to go to gain respect from the open source community. What follows are three steps Microsoft could take in that direction.

1. Fully support OpenDocument and make it the default format in Office applications

Microsoft has accepted the web standards defined by the W3C. Making OpenDocument the default format in Office would be the equivalent of accepting the independently standardized HTML and CSS. Even after accepting the format, Microsoft could still compete with free and open source office suites: they could offer more features, a more beautiful user interface, better performance, or better-quality support. They would, however, lose the ability to lock users in.

2. Open source large parts of Windows and the tools required to build custom versions

Apple has been open sourcing large parts of OS X (but not all of it, one should say) since version 10.0. With significant effort, it is possible to build something like PureDarwin, an open source operating system based on the source released by Apple. Note that, for example, PureDarwin does not use the OS X GUI, since Apple has not open sourced it.

Microsoft could do the same with Windows as Apple did with OS X: open source large parts of the code, and allow people to combine it with other software to build custom versions. Even if some parts of the code remain proprietary, it is still a big improvement over what Microsoft is doing now.

3. Spin off the department for Secure Boot bootloader signing into an independent non-profit entity

Since 2012, machines with UEFI Secure Boot have been appearing on the market. To get a laptop or desktop PC certified for Windows 8, a manufacturer had to support Secure Boot, include Microsoft’s keys, turn Secure Boot on by default, and allow the user to turn it off. Microsoft agreed to sign binaries for vendors of other operating systems, and vendors like Fedora and Canonical got the signatures.

With Windows 10, the requirement to allow the user to turn Secure Boot off vanished, which prevents booting of unsigned operating systems. Furthermore, Microsoft can at any time revoke the key used for signing operating systems other than Windows and render all of them unbootable. Finally, since the key used to sign other operating systems is separate from the one used to sign Windows, revoking it would not affect Windows in any way.

The situation gives Microsoft an enormous amount of power and control over desktops and laptops. It would be much better if the signing process and the management of keys were handled by an independent non-profit entity, governed by a consortium of companies.

Summary

I am sure there are people, even among those who work for Microsoft right now, who would agree with these ideas. However, support for these ideas does not by itself matter much unless and until Microsoft starts taking action in that direction.

And, unless and until that happens, I am not buying the “new open source-friendly Microsoft” bullshit.

AMD and the open source community are writing history

Featured image: Maya Karmon | Unsplash (photo)

Over the last few years, AMD has slowly been walking the path towards having fully open source drivers on Linux. AMD did not walk alone; they got help from Red Hat, SUSE, and probably others. Phoronix also mentions PathScale, but I have been told in the #radeon channel on freenode that this is not the case, and I found no trace of their involvement.

AMD finally publicly unveiled the GPUOpen initiative on December 15, 2015. The story was covered by AnandTech, Maximum PC, Ars Technica, Softpedia, and others. For the open source community that follows the development of the Linux graphics and computing stack, this announcement was hardly surprising: Alex Deucher and Jammy Zhou had presented the plans regarding amdgpu at XDC2015 in September 2015. Regardless, the public announcement in mainstream media proves that AMD is serious about GPUOpen.

I believe GPUOpen is the best chance we will get in this decade to open up the driver and software stacks in the graphics and computing industry. I will outline the reasons for my optimism below. As for the history behind open source drivers for ATi/AMD GPUs, I suggest the well-written reminiscence on Phoronix.

Intel and NVIDIA

AMD’s only competitors are Intel and NVIDIA. More than a decade ago, these three had other companies competing with them. However, all the companies that used to produce graphics processors either ceased to exist due to bankruptcy or acquisition, or shifted their focus to other markets.

Intel has had very good open source drivers for almost a decade now. However, they only produce integrated GPUs, which are not very interesting for gaming and heterogeneous computing. Sadly, their open source support does not include the Xeon Phi, which is a rather interesting device for heterogeneous computing.

NVIDIA, on the other hand, has had very good proprietary drivers for more than a decade. Aside from Linux, these drivers also support FreeBSD and Solaris (CUDA, the compute stack, is Linux-only, however).

To put it simply, if using a proprietary driver for graphics and computing is acceptable, NVIDIA simply does a better job with proprietary drivers than AMD. You buy the hardware, you install the proprietary driver on Linux, and you play your games or run your computations. From a consumer’s perspective, this is how hardware should work: stably, and on release day. From the perspective of an activist fighting for software freedom, it is unacceptable.

Yes, if AMD tries to compete with proprietary drivers against NVIDIA’s proprietary drivers, NVIDIA wins. When neither company really cares about free and open source software, I (and probably others) will just pick the one that works better at the moment and not think much about it.

To give a real-world example: back in 2012 we started a new course on GPU computing at the University of Rijeka Department of Informatics. Had AMD had an open source heterogeneous computing stack ready, we would have gladly picked their technology, even if the hardware had slightly lower performance (which does not really matter for teaching anyway). However, since it came down to proprietary vs. proprietary, NVIDIA offered the more stable and mature solution and we went with them.

Even given the arguments that NVIDIA is anti-competitive because G-Sync works only on their hardware, that AMD’s hardware is not so bad and you can still play games on it, and that if AMD collapses NVIDIA will have a monopoly, I personally could not care less. It is completely pointless to buy AMD’s hardware just so that the company does not collapse; AMD is not a charity, and I require value in return when I give them money.

To summarize, AMD with (usually buggier and less stable) proprietary drivers just did not have an attractive value proposition.

GPUOpen changing the game

However, AMD having the open source driver as their main one gives us a reason to ignore their slight disadvantage in performance per watt and performance per dollar. Now that AMD is developing a part of the open source graphics ecosystem, improving it for themselves as well as the rest of the community, they are a very valuable graphics hardware vendor.

This change empowers the community to disagree with AMD about what should be developed first and to take the lead. As a user, you can fix the bug that annoys you whenever you decide, and you do not need to wait for AMD to fix it whenever they care to. Even if you lack the knowledge to do it yourself, you can pay someone to fix it for you. And this freedom is what makes an open source driver so valuable.

Critics might say this is easy to promise; AMD has said many things many times. And this is true; however, the commits by AMD developers in the kernel, LLVM, and Mesa repositories show that AMD is walking the walk. A quick grep for e-mail addresses containing amd.com shows a nice and steady increase in both the number of developers and the number of commits since 2011.
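For the curious, the counting I describe can be sketched roughly as follows; the clone path (`./linux`) is an assumption, so adjust it to wherever your checkout lives:

```shell
# Count commits per year authored from @amd.com addresses in a local
# clone of a repository (here assumed to be checked out in ./linux).
git -C linux log --since=2011-01-01 \
    --format='%ad %ae' --date=format:'%Y' \
  | grep '@amd\.com' \
  | cut -d' ' -f1 \
  | sort | uniq -c
```

The same pipeline works on Mesa and LLVM clones; piping `%ae` through `sort -u` instead gives the number of distinct developers.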

Critics might also say that AMD is just getting free work from the community and giving ‘nothing’ in return. Well, I wish more companies sourced free work from the community in this way and gave their code back as free and open source software (the ‘nothing’). Specifically, I really wish NVIDIA would follow AMD’s lead. Anyhow, this is precisely how Netscape started what we know today as Firefox, and how Sun Microsystems started what we know today as LibreOffice.

To summarize, AMD with open source drivers as their main offering is very attractive. Free and open source software enthusiasts do not (nor should they) care whether AMD is ‘slightly less evil’, ‘more pro-free market’, or ‘cares more about the teddy bear from your childhood’ than NVIDIA (other types of activists may or may not care about some of these issues). For the open source community, including Linux users, AMD either has open source drivers and works to improve the open source graphics ecosystem, or they do not. If AMD wants Linux users on their side, they have to remain committed to developing open source drivers. It’s that simple: open or irrelevant.

Non-free Radeon firmware

The Free Software Foundation calls for reverse engineering of the Radeon firmware. While I do believe we should aim for free firmware and hardware, I have two problems with this. First, I disagree with a part of Stallman’s position (which is basically mirrored by the FSF):

I wish ATI would free this microcode, or put it in ROM, so that we could endorse its products and stop preferring the products of a company that is no friend of ours.

I cannot agree with the idea that non-free firmware, when included in a ROM on the card, is somehow better than the same non-free firmware uploaded by the driver. The reasoning behind this argument makes exactly zero sense to me. Furthermore, the same reasoning has been applied elsewhere: in 2011, LWN covered the story of the GTA04, which used the ‘include firmware in hardware’ trick to comply with the FSF’s goals.

Second, AMD, for whatever reason, does not want to release the firmware as free and open source, but the firmware is freely redistributable. (They have the freedom not to open it, and even to disagree with us that they should, of course.) While not ideal, for me this is a reasonable compromise that works in practice. I can install the latest Fedora or Debian, and the small firmware blob comes packaged with the distro, despite being non-free. It does not depend on the kernel version; it does not even depend on whether I run the Linux or the FreeBSD kernel.

To summarize, I would like to see AMD release free firmware as much as anyone supporting the FSF’s ideas, and I do not hide that from AMD, nor do I think anyone else should. However, I do not consider the issue of non-free firmware to be anywhere near as important as having a supported free and open source driver, which they finally have. Since NVIDIA is no better regarding free firmware, I do not believe we currently have the leverage required to convince AMD to change their position.

AMD and the open source community

Just like Netscape and Sun Microsystems before them, AMD right now needs the community as much as the community needs them. I sincerely hope AMD is aware of this, and I know that the community is. Together, we have the chance of the decade to free another part of the industry that has been locked down with proprietary software dominating it for so long. Together, we have the chance to start a new chapter in graphics and computing history.