As a hobbyist Python programmer, over the years I have tried a variety of different editors. Back in the day I used to use Eclipse with the PyDev plugin. I then moved on to use GEdit with a few extensions switched on. After that I moved to Geany. I have to admit, much to the shock of some of you, I never really stuck with Sublime, despite a few attempts.
As some of you will know, this coming week I start at GitHub as Director of Community. Like many, when I was exploring GitHub as a potential next career step, I did some research into what the company has been focusing their efforts on. While I had heard of the Atom editor, I didn’t realize it came from GitHub. So, I thought I would give it a whirl.
Now, before I go on, I rather like Atom, and some of you may think that I am only saying this because of my new job at GitHub. I assure you that this is not the case. I almost certainly would have loved Atom if I had discovered it without the possibility of a role at GitHub, but you will have to take my word for that. Irrespective, you should try it yourself and make your own mind up.
Going into this I had a set of things I look for in an editor that tends to work well with my peanut-sized brain. These include:
- Support for multiple languages.
- A simple, uncluttered, editor with comprehensive key-bindings.
- Syntax highlighting and auto-completion for the things I care about (Python, JSON, HTML, CSS, etc).
- Support for multiple files, line numbers, and core search/replace.
- A class/function view for easily jumping around large source files.
- High performance and reliability in operation.
- Cross-platform (I use a mixture of Ubuntu and Mac OS X).
- Nice to have but not required: integrated terminal, version control tools.
Now, some of you will think that this mixture of ingredients sounds an awful lot like an IDE. This is a reasonable point, but what I wanted was a simple text editor, just with a certain set of key features…the ones above…built in. I wanted to avoid the IDE weight and clutter.
This is when I discovered Atom, and this is when it freaking rocked my world.
Atom is an open source cross-platform editor. There are builds available for Mac, Windows, and Linux, and the source is of course available on GitHub too. As a side point, and as an Ubuntu fan, I am hoping Atom is brought into Ubuntu Make and I am delighted to see didrocks is on it.
As a core editor it seems to deliver everything you might need. Auto-completion, multiple panes, line numbers, multiple file support, search/replace features etc. It has the uncluttered and simple user interface I have been looking for and it seems wicked fast.
Stock Atom also includes little niceties such as Markdown preview, handy for editing README.md files on GitHub.
So, in stock form it ticks off most of the requirements listed above.
A Hackable Editor
Basically everything in Atom can be customized. There are core exposed customizations such as look and feel, keybindings, wrapping, invisibles, tabs/spaces etc, but Atom also provides an extensive level of customization via themes and packages. This means that if the requirements I identified above (or anything else) are not in the core of the editor, they can be switched on if there are suitable Atom packages available.
Now, for a long time text editors have been able to be tuned and tweaked like this, but Atom has taken it to a new level.
Firstly, the interface for discovering, installing, enabling, and updating packages is incredibly simple. This is built right into Atom and there are thankfully over 3,000 packages available for expanding Atom in different ways.
Thus, Atom at the core is a simple, uncluttered editor that provides the features the vast majority of programmers would want. If something is missing you can then invariably find a package or theme that implements it, and if you can’t, Atom is extensively hackable to create that missing piece and share it with the world. This arguably provides the ability for Atom to satisfy pretty much everyone while always retaining a core that is simple, sleek, and efficient.
To give you a sense of how I have expanded Atom, and some examples of how it can be used beyond the default core that is shipped, here are the packages I have installed.
Please note: many of the screenshots below are taken from the respective plugin pages, so the credit is owned by those pages.
Symbols Tree View
symbols-tree-view in the Atom package installer.
This package simply provides a symbols/class view on the right side of the editor. I find this invaluable for jumping around large source files.
merge-conflicts in the Atom package installer.
A comprehensive tool for unpicking merge conflicts that you may see when merging in pull requests or other branches. This makes handling these kinds of conflicts much easier.
pigments in the Atom package installer.
A neat little package for displaying color codes inline in your code. This makes it simple to get a sense of what color that random stream of characters actually relates to.
color-picker in the Atom package installer.
Another neat color-related package. Essentially, it makes picking a specific color as easy as navigating a color picker. Handy for when you need a slightly different shade of a color you already have.
terminal-plus in the Atom package installer.
An integrated terminal inside Atom. I have to admit, I don’t use this all the time (I often just use the system terminal), but this adds a nice level of completeness for those who may need it.
linter in the Atom package installer.
This is a powerful base Linter for ensuring you are writing, y’know, code that works. Apparently it has “cow powers” whatever that means.
As I said earlier, editor choice is a very personal thing. Some of you will be looking at this and won’t be convinced about Atom. That is totally cool. Live long and edit in whatever tool you prefer.
Speaking personally though, I love the simplicity, extensibility, and innovation that is going into Atom. It is an editor that lets me focus on writing code and doesn’t try to force me into a mindset that doesn’t feel natural. Give it a shot, you may quite like it.
Let me know what you think in the comments below!
Over the course of my career I have been fortunate to meet some incredible people and learn some interesting things. These have included both dramatic new approaches to my work and small insights that provide a different lens to look at a problem through.
When I learn these new insights I like to share them. This is the way we push knowledge forward: we share, discuss, and remix it in different ways. I have benefited from the sharing of others, so I feel I should do the same.
Therein lies a dilemma though: what is the best medium for transmitting thoughts? Do we blog? Post on social media? Podcast? Record video? Give presentations? How do we best present content for (a) wider consumption, (b) effectively delivering the message, and (c) simple sharing?
Back of the Napkin
In exploring this I did a little back of the napkin research. I asked a range of people where they generally like to consume media and what kinds of media formats they are most likely to actually use.
The response was fairly consistent. Most of us seem to discover material on social media these days and while video is considered an enjoyable experience if done well, most people tend to consume content by reading. There were various reasons shared for this:
- It is quicker to read a blog post than watch a video.
- I can’t watch video at work, on my commute, etc.
- It is easier to recap key points in an article.
- I can’t share salient points in a video very easily.
While I was initially attracted to the notion of sharing some of these thoughts in an audio format, I have decided to focus instead more on writing. This was partially informed by my back of the napkin research, but also in thinking about how we best present thoughts.
Doing Your Thinking
I recently read online (my apologies, I forget the source) an argument that social media is making us lazy: essentially, that we tend to blast out thoughts on Twitter as it is quick and easy, as opposed to sitting down and presenting a cogent articulation of a position or idea.
This resonated with me. Yesterday at a conference, Jeff Atwood shared an interesting point:
“The best way to learn is to teach.”
This is a subtle but important point. The articulation and presentation of information is not just important for the reader, but for the author as well.
While I want to share the things I have learned, I also (rather selfishly) want to get better at those things and how I articulate and evolve those ideas in the future.
As such, it became clear that blogging is the best solution for me. It provides the best user interface for me to articulate and structure my thoughts (a text editor), it is easily consumable, easily shareable, and easily searchable on Google.
So, regular readers may notice that jonobacon.org has been spruced up a little. Specifically, my blog has been tuned quite a bit to be more readable, easier to participate in, and easier to share content from.
I am not finished with the changes, but my goal is to regularly write and share content that may be useful for my readers. You can keep up to date with new articles by following me on Twitter, Facebook, or Google+. As with life, the cadence of this will vary, but I hope you will hop into the articles, share your thoughts, and join the conversation.
Earlier this week I did a keynote at All Things Open. While the topic covered the opportunity of us building effective community collaboration and speeding up the development of Open Source and innovation, I also touched on some of the challenges.
One of these challenges is sustainability. There are too many great Open Source projects out there that are dead.
My view, although some may consider it rather romantic, is that there is a good maintainer out there for the vast majority of these projects, but the project and the new maintainer just haven’t met yet. So, this got me thinking…I wonder if this theory is actually true, and if it is, how do we connect these people and projects together?
While on the flight home I started thinking of what this could look like. I then had an idea of how this could work and I have written a little code to play with it. This is almost certainly the wrong solution to this problem, but I figured it could be an interesting start to a wider discussion for how we solve the issue of dead projects.
The basic crux of my idea is that we provide a simple way for projects to indicate that a project needs a new maintainer. The easiest way to do this is to add a file into the source tree of the project.
This file is an .adopt file, which basically includes some details about the project and indicates whether it is currently maintained:
[Project]
maintained = no
name = Jokosher
description = An audio multitracker built for the GNOME desktop.
category = Audio
repo = http://www.github.com/the-project
discussion = http://mymailinglist.com/jokosher
languages = Python

[Contact]
name = Bob Smith
email = email@example.com
Now, this is a crude method of specifying the main bits of a project and much of this format will need tuning (e.g. we could pull out languages and frameworks out into a new block). You get the drift though: this is metadata about a project that also indicates (a) whether it is maintained, (b) what the key resources are for someone to get a good feel for the project, and (c) who the contact would be to help a new potential maintainer come in.
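Since the format above is INI-style, it maps cleanly onto Python’s stdlib configparser. Here is a minimal sketch of parsing it; the parse_adopt helper and the subset of fields it extracts are my own illustration, not part of any real tool:

```python
import configparser

# Sample .adopt contents, matching the format sketched above.
ADOPT_TEXT = """
[Project]
maintained = no
name = Jokosher
description = An audio multitracker built for the GNOME desktop.
category = Audio
repo = http://www.github.com/the-project
languages = Python

[Contact]
name = Bob Smith
email = email@example.com
"""

def parse_adopt(text):
    """Return a dict of project metadata from .adopt file contents."""
    config = configparser.ConfigParser()
    config.read_string(text)
    return {
        "maintained": config.get("Project", "maintained") == "yes",
        "name": config.get("Project", "name"),
        "category": config.get("Project", "category"),
        "contact": config.get("Contact", "email"),
    }

project = parse_adopt(ADOPT_TEXT)
print(project["name"])        # Jokosher
print(project["maintained"])  # False
```

One nice property of leaning on an INI-style format is that almost every language has a parser for it, which keeps the barrier to entry low for projects and for tooling alike.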
With this file available in the source tree, it should be publicly available (e.g. the raw file on GitHub). A link to this file would then be pasted into a web service that adds it to a queue.
This queue is essentially a big list of .adopt files from around the web. A script then inspects each of these .adopt files and parses the data out into a database.
This database is then used to make this list of unmaintained projects searchable in some way. For example, you could search by category or programming language. While maintained continues to be set to no, the project will remain on the list.
When a suitable maintainer steps up and the project is alive again, all the maintainer needs to do is set the maintained line to yes. On the next scan of the queue, that particular .adopt file will be identified as now maintained and it will be removed, thus no longer appearing in the database.
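That scan-and-prune step could be sketched as follows, assuming the .adopt files have already been parsed into dicts. The table schema and the sync_queue helper are my own illustration, not the actual prototype code:

```python
import sqlite3

def sync_queue(db, projects):
    """Sync parsed .adopt data into the database: unmaintained projects
    are added (or refreshed), and any project whose maintained flag has
    flipped back to 'yes' is dropped from the adoption list."""
    db.execute(
        "CREATE TABLE IF NOT EXISTS projects "
        "(name TEXT PRIMARY KEY, category TEXT, repo TEXT)"
    )
    for p in projects:
        if p["maintained"] == "no":
            db.execute(
                "INSERT OR REPLACE INTO projects VALUES (?, ?, ?)",
                (p["name"], p["category"], p["repo"]),
            )
        else:
            # A maintainer has stepped up: remove it from the list.
            db.execute("DELETE FROM projects WHERE name = ?", (p["name"],))
    db.commit()

db = sqlite3.connect(":memory:")
jokosher = {"maintained": "no", "name": "Jokosher", "category": "Audio",
            "repo": "http://www.github.com/the-project"}
sync_queue(db, [jokosher])
print(db.execute("SELECT name FROM projects").fetchall())  # [('Jokosher',)]

# The maintainer flips the flag to yes; the next scan removes the entry.
jokosher["maintained"] = "yes"
sync_queue(db, [jokosher])
print(db.execute("SELECT name FROM projects").fetchall())  # []
```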
A First Step
To provide a sense of how this could work I threw some Python together at https://github.com/jonobacon/adopt-a-project.
It is built using CherryPy to keep it simple. I wanted to avoid a full-fledged Django-type framework until the core premise of how this works is fleshed out. A caveat here: this is a really quick, thrown-together prototype designed to encourage some discussion and ideation.
It works like this:
- You run website.py to spin up a local webserver on 127.0.0.1:8080 that will display the empty queue. You can then add some remotely or locally hosted .adopt files by clicking the button at the top of the page. I have included three examples on GitHub 1 2 3. These are added to the queue.
- You then run adopt-queue.py, which scans the queue and creates a SQLite3 database with the data.
- The website then includes a really simple and crude list of projects and the links to the relevant resources (e.g. code, discussion).
Now, as you can tell, I have only spent a few hours knocking this together and there are many things missing. For example:
- It doesn’t include the ability to search for projects or search by language.
- The schema is a first cut and needs a lot of care and attention.
- The UI is very simplistic.
- There is barely any error-checking.
Topics For Discussion
So, this is a start. I think there are a lot of interesting topics for discussion here though:
- Is this core concept a good idea? There is a reasonable likelihood it isn’t, but that is the goal of all of this…let’s discuss it. 🙂
- If it is the core of a good idea, how can the overall approach be improved and refined?
- What kind of fields should be in an .adopt file? How do we include the most important pieces of information while also keeping the barrier to entry low for projects?
- What should be the hand-off to encourage someone to explore and ultimately maintain a project? A list of dead projects is one thing but there could be instructions, guides, and other material to help people get a sense of how they maintain a project.
- Maintaining a project is a great way for students to build strong skills and develop a resume – could this be a carrot for encouraging people to revive dead projects?
- What kind of metrics would need to be tracked in this work?
To keep things simple and consistent I would like to encourage this discussion over on the project’s issue tracker. Share your comments, thoughts, and methods of improvement there.
Tomorrow morning at the ungodly hour of 6am I board a flight to Raleigh for the All Things Open conference. The conference starts on Monday, but I am flying out earlier for a bunch of meetings with folks from opensource.com.
This is my first time at All Things Open but it looks like they have a stunning line up of speakers and some great folks attending.
I just wanted to share some of the things I will be doing there:
- Tues 20th Oct at 9.05am – Keynote – I will be delivering a keynote about the opportunity for Open Source and effective collaboration and community leadership to solve problems and innovate. The presentation will delve into the evolution of technology, where Open Source plays a role, the challenges we need to solve, and the opportunity everyone in the room can participate in.
- Tues 20th Oct at 12.15pm – Lightning Talk – I will be giving one of the lightning talks. It will be an introduction to the fascinating science of behavioral economics and how it can provide a scaffolding for building effective teams and communities.
- Tues 20th Oct at 2.15pm – Presentation – I will be delivering a presentation called A Crash Course in Bacon Flavored Community Management. In it I will discuss the key components of building strong and empowered communities, how we execute on those elements, how we manage politics and conflict, and how we track success and growth.
- Tues 20th Oct at 3.00pm – Book Signing – I will be signing free copies of The Art of Community at the opensource.com booth (booth #17). Come and say hi, get a free book, and have a natter. Books are limited, so get there early.
As ever, if you would like to have a meeting with me, drop me an email at firstname.lastname@example.org and we can coordinate a time.
I hope to see you there!
Some time ago I was introduced to Peter Diamandis, co-founder of XPRIZE, Human Longevity Inc, Planetary Resources, and some other organizations. We hit it off and he invited me to come and build community at the XPRIZE Foundation. His vision and mine were aligned: to provide a way in which anyone with passion and talent can play a role in XPRIZE’s ambitious mission to build a brighter future.
The ride at XPRIZE has been thrilling. When I started we really had no community outside of some fans on Twitter and Facebook. Today we have a community website, forum, wiki, documentation, and other infrastructure. We created the XPRIZE Think Tanks programme of community-driven local groups and now have groups across the United States, India, Asia, Europe, South America, Australia, and beyond. We have a passionate collaborative community working together to explore how they can innovate to solve major problems that face humanity.
Some of our earliest community members, at a sprint at the XPRIZE office
I am proud of my work at XPRIZE but even prouder of the tremendous work in the community. I am also proud of my colleagues at the foundation who were open to this new concept of community percolating into everything we do.
Although my experience at XPRIZE has been wonderful, I have missed the technology and Open Source world.
Something Jim Whitehurst, CEO of Red Hat said to me a while back was that coming from Delta to Red Hat, and thus outside of Open Source into Open Source, helped him to realize how special the Open Source culture and mindset is.
Likewise, while I never left Open Source, moving to XPRIZE was stepping back from the flame somewhat, and it helped me to see the kindness, creativity, agility, and energy that so many of us in the Open Source world take for granted.
As such, despite the rewarding nature of my work at XPRIZE, I decided that I wanted to get back closer to technology. There was a caveat though: I still wanted to be able to play a role in furthering the efficacy and impact of how we as human beings collaborate and build communities to do incredible things.
A New Journey
With this in mind, I am delighted to share that on the 14th November 2015 I will be joining GitHub as Director of Community.
GitHub are a remarkable organization. In recent years they have captured the mindshare of developers and provided the go-to place where people can create, share, and collaborate around code and other content. GitHub is doing great today, but I think there is huge potential for what it could be in the future for building powerful, effective communities.
My role will be to lead GitHub’s community development initiatives, best practice, product development, and engagement.
My work will be an interesting mix of community engagement, policy, awareness, and developer relations, but also product management to enhance GitHub for the needs of existing and future communities.
I am also going to work to continue to ensure that GitHub is a safe, collaborative, and inclusive environment. I want everyone to have the opportunity to enjoy GitHub and be the best they can be, either within the communities they are part of on GitHub, or as part of the wider GitHub userbase.
Over the next few weeks I will be taking care of the handoff of my responsibilities at XPRIZE and my last day will be on Fri 30th Oct 2015. I will then be flying to Bangalore in India to keynote the Joomla World Conference, taking a little time off, and then starting my new position at GitHub on the 17th November 2015.
Across the course of my career I have given, and continue to give, a lot of presentations at conferences all over the world. In the vast majority of them I have used LibreOffice because I like and support the project and I like my presentations being in an open format that can be used across different Operating Systems.
At times I have also used Keynote and PowerPoint, and there are a few small things that LibreOffice is missing to be the perfect presentation tool. I thought I would share these here in the hope that these features will be built and thus turn LibreOffice Impress into the most perfect presentation tool on the planet. Naturally, if these features do get built, I will write a follow up post lavishing praise on the LibreOffice team. If anyone from the LibreOffice team wants to focus on these I am more than happy to provide feedback and input!
One of the most fantastic elements of both Keynote and PowerPoint is the smart guides. These are guidelines that appear when you move an object around to help you align things (such as centering an object or making sure multiple objects are the same width/height from each other).
This feature is invaluable and the absence of it in Impress is notable and at times frustrating. I think a lot of people would move over to LibreOffice if this was available and switched on by default.
Moving objects is slow and clunky in LibreOffice. An object doesn’t move smoothly pixel by pixel but instead jerks along as I drag my mouse, seemingly in 5 to 10 pixel increments. This makes positioning objects less precise and the whole interaction feels sluggish.
Likewise, selections (e.g. selecting multiple objects) and reordering slides has the same chunkiness.
If this was refined it would make the whole app feel far more pleasurable to use.
There have been times when giving a presentation that I have wanted to embed a window in the presentation to save me breaking out of it to show the audience something. Breaking out of a presentation ruins the magic…we want to stay in full presentation mode where possible!
As an example, I might want to show the audience a web page. I would like to therefore embed Chrome/Firefox into my presentation.
I might also want to show a feature using a command line tool. I would like to embed the terminal into my presentation, potentially on the left side of the slide with some content to the right of it. This would be invaluable for teaching programming for example. I might also want to embed a text editor.
Importantly, embedded windows would preferably have no window borders and an option to remove the menu so it looks fully integrated. This would be a tremendous feature that neither Keynote nor PowerPoint has.
Nested Section Slides
Many presentations have multiple sections. If you have a lot of slides like I do it can be handy to be able to break slides in sections (with the appropriate slides nested under a main slide for each section). This is a standard feature in Keynote. This makes it easy to jump to different sections when editing. What would be really ideal is if there is also a hotkey that can jump between the different sections – this provides a great opportunity then to jump between different logical pieces of a presentation.
When putting together a deck for Bad Voltage Live I wanted to play a slide with an embedded audio clip in it and configure what happens before or after the audio plays. For example, I would like the audio to play and then automatically transition to the next slide when the audio is finished. Or, I want to load a slide with an embedded audio clip and then require another click to start playing the audio. From what I can tell, these features are missing in LibreOffice.
Those are the main things for me. So, LibreOffice community, think you can get these integrated into the app? Kudos can be yours!
Tonight, Wed 30th September 2015 at 7pm, there are five important reasons why you should be in Fulda, Germany:
- A live Bad Voltage show that will feature technology discussion, competitions, and plenty of fun.
- Free beer.
- The chance to win an awesome Samsung Galaxy Tab S2.
- Free entry (including the beer!).
- A chance to meet some awesome people.
It is going to be a blast and we hope you can make it out here tonight.
Just remember, you might leave with one of these:
Doors open tonight at 7pm, show starts at 7.30pm at:
University of Applied Science Fulda,
Leipziger Str. 123, 36037
We hope to see you there!
Some of you may know that I do a podcast called Bad Voltage with some friends: Stuart Langridge, Bryan Lunduke, and Jeremy Garcia.
The show covers Open Source, technology, politics, and more, and features interviews, reviews, and plenty of loose, fun, and at times argumentative discussion.
On Wed 30th Sep 2015, the Bad Voltage team will be doing a live show as part of the OpenNMS Users Conference. The show will be packed with discussion, surprises, contests, and give-aways.
The show takes place at the University Of Applied Sciences in Fulda, Germany. The address:
University of Applied Science Fulda
Leipziger Str. 123
36037 Fulda, Germany
Tel: +49 661 96400
For travel details of how to get there see this page.
Everyone is welcome to join and you don’t have to be joining the OpenNMS Users Conference to see the live Bad Voltage show. There will be a bunch of Ubuntu folks, SuSE folks, Linux folks, and more joining us. Also, after the show we plan on keeping the party going – it is going to be a huge amount of fun.
To watch the show, we have a small registration fee of €5. You can register here. While this is a nominal fee, we will also have some free beer and giveaways, so you will get your five euros worth.
So, be sure to come and join us. You can watch a fun show and meet some great people.
REGISTER FOR THE SHOW NOW; space is limited, so register ASAP.
Disclaimer: I am not a member of the Mycroft team, but I think this is neat and an important example of open innovation that needs support.
Mycroft is an Open Source, Open Hardware, Open APIs product that you talk to and it provides information and services. It is a wonderful example of open innovation at work.
They are running a Kickstarter campaign that is pretty close to its goal, but it needs further backers to nail it.
I recorded a short video about why I think this is important. You can watch it here.
I encourage you to go and back the campaign. This kind of open innovation across technology, software, hardware, and APIs is how we make the world a better and more hackable place.
Recently there has been a flurry of concerns relating to the IP policy at Canonical. I have not wanted to throw my hat into the ring, but I figured I would share a few simple thoughts.
Firstly, the caveat. I am not a lawyer. Far from it. So, take all of this with a pinch of salt.
The core issue here seems to be whether the act of compiling binaries provides copyright over those binaries. Some believe it does, some believe it doesn’t. My opinion: I just don’t know.
The issue here though is with intent.
In Canonical’s defense, and specifically Mark Shuttleworth’s defense, they set out with a promise at the inception of the Ubuntu project that Ubuntu will always be free. The promise was that there would not be a hampered community edition and full-flavor enterprise edition. There will be one Ubuntu, available freely to all.
Canonical, and Mark Shuttleworth as a primary investor, have stuck to their word. They have not gone down the road of the community and enterprise editions, of per-seat licensing, or some other compromise in software freedom. Canonical has entered multiple markets where having separate enterprise and community editions could have made life easier from a business perspective, but they haven’t. I think we sometimes forget this.
Now, from a revenue side, this has caused challenges. Canonical has invested a lot of money in engineering/design/marketing and some companies have used Ubuntu without contributing even nominally to its development. Thus, Canonical has at times struggled to find the right balance between a free product for the Open Source community and revenue. We have seen efforts such as training services, Ubuntu One etc, some of which have failed and some of which have succeeded.
Again though, Canonical has made their own life more complex with this commitment to freedom. When I was at Canonical I saw Mark very specifically reject notions of compromising on these ethics.
Now, I get the notional concept of this IP issue from Canonical’s perspective. Canonical invests in staff and infrastructure to build binaries that are part of a free platform and that other free platforms can use. If someone else takes those binaries and builds a commercial product from them, I can understand Canonical being a bit miffed about that and asking the company to pay it forward and cover some of the costs.
But here is the rub. While I understand this, it goes against the grain of the Free Software movement and the culture of Open Source collaboration.
Putting the legal question of copyrightable binaries aside for one second, the current Canonical IP policy is just culturally awkward. I think most of us expect that Free Software code will result in Free Software binaries, and to claim that those binaries are limited or restricted in some way seems unusual and the antithesis of the wider movement. It feels frankly like an attempt to find a loophole in a collaborative culture where the connective tissue is freedom.
Thus, I see this whole thing from both angles. Firstly, Canonical is trying to find the right balance of revenue and software freedom, but I also sympathize with the critics that this IP approach feels like a pretty weak way to accomplish that balance.
So, I ask my humble readers this question: if Canonical reverts this IP policy and binaries are free to all, what do you feel is the best way for Canonical to derive revenue from their products and services while also committing to software freedom? Thoughts and ideas welcome!