Organic interface design for GNOME

Interface design is a complex business. There are a great many schools of thought about how to build an effective interface, and ultimately no-one is 100% correct. Lots of theory, lots of academia, lots of opinion, but little hard evidence about what design constructs actually work best for general human-computer interaction.

Recently I kicked off a segment on everyone’s-favorite-un-PC-ramblefest, LUGRadio, in which I expressed concerns that the GNOME project is not deciding on a direction for a next-gen incarnation of the environment, and KDE4 is primed to swoop in and eat its lunch. I am pleased to see the segment kicked off some discussion, and the issue has been raised in the minds of some core GNOME contributors.

While at GUADEC 2006 I sat on the patio of our wooden shack with Mirco Muller at about 3am and we spent quite some time discussing concepts about what a next-gen GNOME could look like. For a while I had been mulling over different concepts and ideas about how GNOME should work, and trying to distill them into core interactions for a desktop. In my mind, before you even think about mocking up a user interface design, you need to define the modes of interaction; that is like deciding which tools and ingredients you are going to need to bake a cake. If you don’t decide on the tools and ingredients, you cannot effectively move on to the design stage and then the implementation.

The problem with current desktops is that they are largely artificial. We have created modes of interaction that the user has to learn to understand the computer, instead of the computer trying to understand the user. We have to learn where things live, how to move things around, which things can be clicked on and which can’t, how sensitivity and insensitivity work, and other artificial conventions. Fundamentally, we the users have to fit in with what the computer wants us to do.

The next-gen GNOME needs to change this. It really, really does. What I want to see is an organic environment; one that is designed around human interactions, tasks and concepts that we find natural, intuitive and repeatable. Do you ever have those experiences where you think “it would make sense if it worked this way, I wonder if it does” and to your surprise it does? We need to fill our desktop with these experiences. To do this, we need to understand what interactions and concepts are natural to us as humans, and work on these concepts in GNOME.

So, with time not my friend right now, here is a rough list of some organic concepts that I think we need to bear in mind in our thinking:

  • Pile Theory – nope, nothing to do with a nasty dose of the bum grapes, but the idea that we all naturally collect and stack things together into piles. I think this is a fundamental concept in a desktop – collections of things. Think of archives, directories, photo sets, collections of songs, related videos – they are groups of things that we need to access both as a group and as the individual items in that group. You can see this theory in action: look at many people’s desktops and the groups of icons of related bits and pieces – we need to make it easy to create these piles. Imagine a 3D interface to these piles where a bunch of items pile on top of each other and you can explode the pile or fit it back together and re-organise it in different ways.
  • A Physical Environment – I want to pick up documents that I am editing, spin them round and scribble notes on them, I want them to look like they are shredded when I delete them, I want to stick related things together like lego – I want a physicality to the things that happen on my desktop. A great first step with this was when Compiz put virtual desktops on a cube – it made the concept of multiple desktops more tangible. We need to apply this kind of physicality to all aspects of the desktop.
  • Contextual Tools – something I have banged on about with Jokosher. You should only ever see tool options appear when it makes sense and when you can actually use those tools – insensitive, greyed-out tool options are nothing more than a distraction and a waste of space. In Jokosher, when you make a selection, the tools that can be used on that selection appear; we need to apply this concept to the entire desktop (see the rough sketch after this list). This makes the desktop feel more organic in itself, as tools only ever appear when they are applicable to your context. It also makes the desktop far less cluttered and gets away from the nightmare of modal tools. We particularly want to get away from the hundreds of toolbar options that clutter our applications. For all that people have heralded the Ribbon in Microsoft Office as a great idea, I am pretty convinced it may over-egg the pudding and confuse people with so many functional options on show. We fundamentally need our desktop to be contextual – more on this later.
  • Two Handed Interaction – some of the work with multiple mouse pointers makes this possible. For some applications this makes perfect sense. Think of a 3D modeller such as Blender – the most natural modelling process is sculpting using your hands, and this requires two hands. Think about putting things in other things – it makes sense to hold the container open to put the things in it (like when you put items in a carrier bag). Naturally there is a hardware implication for this which will delay its adoption.
  • Real Contextual Working – a while back I wrote up my thoughts for a project desktop. We need our applications to be aware of what the user wants to do and ensure they are organic enough to evolve into a form that is conducive to that task.
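
To make the Contextual Tools idea a little more concrete, here is a minimal sketch of how it could look in PyGTK. It assumes GTK 2.10 or later for the text buffer’s has-selection property, and the widget layout is purely illustrative rather than a proposed design:

    import gtk

    # Minimal sketch: selection-specific tools appear only while a selection
    # exists, instead of sitting greyed out in a toolbar all the time.
    window = gtk.Window()
    window.connect("destroy", gtk.main_quit)

    vbox = gtk.VBox(spacing=6)
    window.add(vbox)

    # Tools that only make sense for a selection live in their own box.
    selection_tools = gtk.HBox(spacing=6)
    selection_tools.pack_start(gtk.Button("Cut"), expand=False)
    selection_tools.pack_start(gtk.Button("Copy"), expand=False)
    vbox.pack_start(selection_tools, expand=False)

    view = gtk.TextView()
    vbox.pack_start(view)

    def on_selection_changed(buf, pspec):
        # Show or hide the contextual tools rather than desensitising them.
        if buf.get_has_selection():
            selection_tools.show()
        else:
            selection_tools.hide()

    view.get_buffer().connect("notify::has-selection", on_selection_changed)

    window.set_default_size(400, 300)
    window.show_all()
    selection_tools.hide()   # nothing is selected yet

    gtk.main()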

With the growth in 3D technology, we have the opportunity to make all of this happen. This is just a collection of rough notes, but at GUADEC I hope to flesh some of these ideas out with other people. We need to break down the barriers to interaction, but also be brave enough to stand up and set a direction, which was the point of the segment. If I had my own way, I would love to take a week off and spend it designing a bunch of mock-ups. I have a fairly clear idea in my head of how this kind of stuff would work, inspired by various interfaces and concepts, but I just don’t have the time to mock it up.

  • http://phoenix.student.utwente.nl/~klaveren Theo

    I find your comments interesting, I have however one remark to make. Your point that stuff on computers should work like they do in real life has to be taken with great caution, because it has been shown that things such as that may really suck. Take the old Quicktime interface for example, it had things like volume wheels and so on which turned out to totally suck on the pc screen. It may just so happen that just because I make huge piles of stuff on my desk in real life, I may not want to organize my stuff on the computer in that way. (In fact, I’m pretty sure I don’t. Have you ever seen my desk? :))

    Not that I disagree with your ideas, just something to think about.

  • http://gordallott.wordpress.com gord

    just to note on the 2 handed thing, a mouse is not a great idea for this kind of work but there are lots of handy devices that let you work in a more real sense, such as http://www.linux.com/article.pl?sid=07/03/19/162257

    in the blender example though i hear the best way of working is with 3d glasses with the graphics rendered in a stereographic mode and a nice stylus.

    i’m not sure that computer interfaces for the most part need to work in this kind of hands on way, but rather they need to be able to handle this hands on way if the user chooses that way (for example, a nice integration with touch monitors)

  • http://cinemasie.com François

    I totally agree with the main idea and most examples. That’s why I’m defending projects like Compiz/Beryl. It’s not just about eye candy and useless effects. It makes the desktop feel more real. And if it feels real, you are more comfortable with it. You don’t need to learn, you just do what seems normal. For example, to close an application, I proposed measuring how fast you move the window. This way, if you “throw” the window away, it closes. It just means “I don’t want to use this anymore”.

    Hundreds of ideas like those are needed to change the way interaction is designed. It must be more user oriented than files/software/hardware… KDE is doing an interesting job on this with KDE4, Gnome should really NOT miss this train. There are already some interesting pieces of software sharing this philosophy (like Gimme, not yet polished, but that’s the right philosophy I think), but it MUST become the global policy. Jono, I hope you can use your position to give this a big buzz.

  • Ferdinand

    Would it be possible to make it easier to design interfaces? This way more people could try their hand at designing an interface. Like with Firefox extensions, you would begin to see patterns in what works and what doesn’t. You see this with webpages, why not with Gnome?

  • Hmmm

    Pile theory: I’ve been using this for years, except I just call it “putting stuff in a folder”. And instead of “exploding the pile” I simply call it “opening the folder”. It’s a lot less exciting, but it sounds like it does exactly the same thing and doesn’t require any fancy 3d desktop whizbang

  • Ian Stoffberg

    http://www.bumptop.com/

    They have a nice video.

    Desktop using piles of documents to interact with. Stacking and grouping of docs which simulates the clutter of real world objects. Pretty neat concept.

    To give credit where due, I found this on moosy.blogspot.com

  • jono’s #2 fan

    Please just stick to ripping off garage band.

  • http://ywwg.com/wordpress Owen Williams

    I think there are three things that need to change for GNOME to support all of these cool ideas. The good news is we already have one of them:

    1. Real Compositing. Luckily, we have this now.
    2. The API stack needs to support the idea of motion and animation natively. There’s going to be a lot more bouncing and springing and spinning in future desktops, and we will need good support for animated widgets.
    3. Multi-touch is the wave of the future, and the stack will need to support the idea of multiple things happening at once.

    Once GNOME gets all three of those things, the rest of the future desktop can be written. We can experiment with piles, we can play around with bouncy interfaces (see the iPhone demo), we can try out gestural ideas. But we need these features in order to do any of it well.

  • Leen Toelen

    Hi,

    I am doing a lot of development in Eclipse, and they use a lot of global action listeners. All components can subscribe to a global selection listener and act accordingly. They also have a great plugin mechanism, where plugins can add actions in right-click menus, toolbars etc. They have the advantage that they only need to support Java; making it work cross-language may be a bit more difficult.
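
    A rough sketch of that pattern in plain Python (the names are made up for illustration – this is not the actual Eclipse API):

        # Sketch of a desktop-wide selection service: components subscribe once
        # and react to whatever the user selects anywhere. Names are invented.

        class SelectionService(object):
            def __init__(self):
                self._listeners = []

            def add_listener(self, callback):
                self._listeners.append(callback)

            def selection_changed(self, source, selection):
                # Broadcast the new selection to every subscribed component.
                for callback in self._listeners:
                    callback(source, selection)

        service = SelectionService()

        def show_audio_tools(source, selection):
            # A contextual toolbar only appears for audio selections.
            if selection.get("type") == "audio":
                print("showing trim/fade tools for %s" % selection["name"])

        def update_status_bar(source, selection):
            print("status: %s selected in %s" % (selection["name"], source))

        service.add_listener(show_audio_tools)
        service.add_listener(update_status_bar)

        # Any application announces what the user just selected.
        service.selection_changed("file-manager", {"type": "audio", "name": "riff.ogg"})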

    regards, Leen

  • mlhudson

    Jono, THANK YOU for your thoughtfulness about this subject. I am a total NEWBIE to ubuntu and the gnome experience. Your ideas sound as though they truly could push the gnome desktop beyond eye candy (one of the over-focus areas of vista) into beautiful usability. That’s what a desktop’s main task is: USABILITY in a way that is enjoyable. That is the “market” for desktops. That’s what MS has always struggled with. Apple (I’m a long time user) gets the attractive part, and is getting better to use, but I would LOVE to see gnome be the BLEEDING EDGE in this!

    Kudos & Blessings! Matt

  • http://cinemasie.com François

    And I would not use “organic”, as to me it’s related to living organisms. I know the idea is “human beings oriented”, but here we talk more about a realistic, or even “physical”, approach to things. The pile idea is not really “organic” but more “physical”. But I know that’s maybe a bit early to be picky on words… :mrgreen:

  • http://xubuntublog.wordpress.com Vincent

    Actually, 3D-Desktop was first with the rotating cube ;)

    http://desk3d.sourceforge.net/

  • C’est moi

    I do find greyed out tool options useful. Making tools too contextual can make it much harder for a user to learn how to use a program. For example: a hypothetical photo management app, which only shows you that it can do slide shows if you first select the photos to show. Result: frustrated user saying “I swear it had a slide show button the last time I looked, but now it’s just gone!” :mad:

  • Meneer R

    SEMANTIC INTERFACES

    Well, although you can invent and agree on some visual metaphor, it will inherently stay a metaphor. Secondly, it means that all programs need to be rewritten whenever you change or upgrade this metaphor.

    There is a much bigger architectural problem with gnome. The HIG and its associated metaphors are guidelines. Programs are expected to translate the semantics of their use cases into actual code and implement (and repeat) the HIG there.

    Why not switch from defining the ‘syntax’ of the interface to defining the ‘semantics’ of the interface? I’m talking about a lib-gnome-hig: a new gnome library that you give an abstract definition of the possible interactions, which it would then translate into an interface that follows the HIG rules.

    This will make a number of things possible: 1) we can experiment with different types of interfaces just by changing this one library; 2) we can have specialized interfaces for blind people, people with motor disabilities and hackers who log in to their machine using ssh (why shouldn’t they all be able to use Jokosher?); 3) these apps can also be made to integrate with other desktops with other guidelines (and perhaps other graphical toolkits).
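
    A very rough sketch of what I mean (every name here is invented; no such library exists today):

        # The application declares the *semantics* of its interactions once;
        # interchangeable renderers decide how they are presented. Hypothetical code.

        interface_description = {
            "application": "Jokosher",
            "actions": [
                {"id": "add-instrument", "label": "Add Instrument", "needs": []},
                {"id": "trim",           "label": "Trim",           "needs": ["selection"]},
                {"id": "export",         "label": "Export Project", "needs": ["project"]},
            ],
        }

        class GraphicalRenderer(object):
            """Would build HIG-compliant GTK widgets from the description."""
            def render(self, description, context):
                for action in description["actions"]:
                    if all(need in context for need in action["needs"]):
                        print("[toolbar] %s" % action["label"])

        class SpokenRenderer(object):
            """Would present the same actions to a blind user, or over ssh."""
            def render(self, description, context):
                labels = [a["label"] for a in description["actions"]
                          if all(need in context for need in a["needs"])]
                print("You can: " + ", ".join(labels))

        # The same semantic description drives two very different interfaces.
        context = set(["project", "selection"])
        GraphicalRenderer().render(interface_description, context)
        SpokenRenderer().render(interface_description, context)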

    Gnome isn’t about GTK. It’s about HIG. Yet we can’t update HIG without rewriting all applications, nor are applications at this point perfectly consistent.

    Ah, well, just my two cents.

  • http://polynox.info alex

    Greyed out options are useful in one way: you see that they are there. If you edit something and you see a greyed-out menu entry, you know that you can apply it to that object type in general (not in this special case, but in others). If you don’t display the options that are available on a certain type of thing, the user simply thinks they don’t exist.

    I’ve got another point for you: we need a shift from a program-centric to a document-centric GUI. The user doesn’t want to use Firefox or Epiphany, he wants to surf the web. He doesn’t want to use OpenOffice Writer and GIMP, he wants to create a document with text and pictures. We need, and this is a gigantic step, functionality modules instead of applications. You may think that’s about the same, but there are a few differences. Functionality modules work in a common environment, which means they all follow the same GUI guidelines, they all have their options (if you want to keep ‘global’ option menus) in one place (not X File menus), they all work on a single document at the same time (which should be a container format where you can add different media parts), and they preferably blend into the desktop.

    Like: you click on a file object, it grows big, widget-like; media-specific windows (to manipulate the media) and a document-specific window (to print, add other media, upload, send, etc.) come up; you start editing, you drag and drop another document in, you get options to merge the two, include only certain things, position them in place and time (if you’re creating something animated).

  • Robert Devi

    As others have stated, the real world is a poor analogy for many computer processes, simply because real-world processes carry the tradeoffs needed to function in the physical world. Anyone who has had to automate a complex paper business process knows exactly how non-optimal and error-prone these tradeoffs can be.

    For instance, in the real world, only one person can view a file, if a file is in one folder it cannot be in another, and it’s very hard to know what the current version of a document is. This need not be the case with computers, and modelling the computer world this way would make computers less useful. On a computer, more than one person can view a document (although only one person can typically write), and symbolic or hard links allow a document to exist in more than one place. In the real world, when you write a document with formulas in it, they don’t automatically calculate the results, and it’s not possible to link the results of one formula into another (hello MathCad).

    If you want to make computers more useful you have to make things less like the real world. For instance, why can only one person edit a document? Why must the process of sharing a file between folders be so cumbersome as using symbolic links – couldn’t I just create a dynamic folder that groups items with the same tags, and tag items as belonging to more than one dynamic folder by simply dropping them in? Why can’t I automatically keep all versions of a document (hello VMS), see only the current one, and have the option of getting rid of old revisions if the disk gets too full (i.e. full bzr/svn/cvs integration)? It wouldn’t have to be a global thing. It should be possible to right-click on a folder and have it store all revisions.
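
    A rough sketch of the dynamic folder idea (the class names are made up; a real version would sit on an indexer like Tracker rather than a dictionary):

        # A "dynamic folder" is a saved query over tags, so one file can appear
        # in any number of folders at once without symbolic links. Hypothetical code.

        class TagStore(object):
            def __init__(self):
                self._tags = {}                    # path -> set of tags

            def tag(self, path, *tags):
                self._tags.setdefault(path, set()).update(tags)

            def with_tags(self, *tags):
                wanted = set(tags)
                return sorted(p for p, t in self._tags.items() if wanted <= t)

        class DynamicFolder(object):
            def __init__(self, store, *tags):
                self.store, self.tags = store, tags

            def contents(self):
                # Evaluated on demand: tagging a file "drops" it into this folder
                # without it ever leaving any other folder.
                return self.store.with_tags(*self.tags)

        store = TagStore()
        store.tag("/home/robert/report.odt", "work", "current")
        store.tag("/home/robert/budget.ods", "work", "finance")

        work = DynamicFolder(store, "work")
        current = DynamicFolder(store, "current")
        print(work.contents())      # both files
        print(current.contents())   # just the report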

    Relating to the piles, as stated they already exist in the computer world as folders and they’re more efficient. That being said, there’s no reason folders couldn’t be improved. Imagine being able to right-click on a folder on the desktop or in Nautilus and have it expose its contents as a screen area. It would be possible to see the contents without opening it, and it would allow you to place different folder blocks in different parts of your screen (e.g. top right has all my project documents, bottom left has all my personal stuff, bottom right has gadgets like weather, etc., top left has the “current working area”). People currently put a lot on their desktops instead of in folders simply because they want it accessible, but they get lost when their desktops get too full. This feature allows people to have the best of both worlds. It should also be possible to do in Nautilus via a “view as expanded” mode which expands top-level folders in much the same way as the new GNOME control center does (see http://blog.vojta.name/images/sled-application-browser.png ).

    One more area which GNOME should expand into is the whole “common blackboard collaboration” metaphor. Yes, we can video conference and IRC, but if I want to draw something or write something or show a picture or a web site to a group of people outside my computer, it’s very cumbersome and there’s no easy way of doing it without using X tricks like having all the people log into a common machine and share the mouse. The old Microsoft NetMeeting had some of this, but AFAIK it doesn’t exist anymore without fees and it’s not cross-platform. There’s no reason why either needs to be the case.

    BTW, I’m in favour of greying out menu items as opposed to hiding them. It’s important for muscle memory and helps you find things (things don’t just disappear on you without you knowing why). The problem with hiding things is that if you’re looking for a menu item and you can’t find it, you don’t know if it’s because you can’t do it or because you can’t find it and should keep looking. That being said, greying out could be improved. If something is greyed out, it would be nice to move my mouse cursor over a question mark beside the item and get an explanation of why I can’t access that functionality (e.g. if “Save” is greyed out, it could tell me that the file is read-only, and offer to change permissions or sudo to a user that could save it).

  • dave

    I was intrigued to read this post after seeing the title, but I somewhat sadly have to admit to disagreeing 100% (or maybe 180°) with almost every point.

    Others have pointed out that piles are mostly just folders, that grayed out options are a feature, not a bug (particularly if they give feedback on why they are grey) and that real-life interaction isn’t always the best metaphor since reality is often limited in unfortunate ways (e.g. real items can only be in one place/folder/pile at a time).

    And I personally believe that 3D in UI is the single biggest bit of unsubstantiated hype of the last decade. Four desktops arranged at the corners of a 2D square are infinitely more understandable and usable than the four out of six sides of a cube.

    A trollish commenter above advises sticking to copying Apple. I would also recommend this, though more seriously. I’m regularly amazed by the restraint that Apple shows with UI evolution, and accusations of useless eye-candy usually miss important UI elements – for example, the genie effect shows users exactly where their window has gone, and was added in response to UI studies that revealed that people who didn’t understand taskbars kept ‘losing’ and reopening windows. Interestingly they seem to be using lots of “3D” technology to build interfaces that are effectively 2D (e.g. expose). It’s clear tightly run corporations with egotistical maniacs at the top have an easier time saying no to things than open communities, but someone should be trying.

    Having said that, surely it’s only a matter of time before someone comes along, takes all the blingy technology and removes everything but the essentials and a subtly professional polish. However, I get the feeling they’d get there faster if they just blatantly stole from Apple and then added some value on top.

  • dave

    Oh, and here’s a great article about what makes things ‘intuitive’ which I point out whenever I see someone use that word.

    http://www.uie.com/articles/design_intuitive/

    What I don’t think he points out strongly enough is that ‘intuitive’ is almost entirely subjective. What’s intuitive to a long time Mac user isn’t automatically intuitive to a Windows power user or a MySpace-using teen; therefore you really have to define the target group, and test on them. And once you do that, the word ‘intuitive’ itself becomes pretty much redundant and more likely a barrier than an aid to communication, since most people think that ‘intuitiveness’ lies only in the tool, not in the relationship between the user and the tool.

  • http://www.scottainsliesutton.net/ Scott

    I found your latest entry very interesting. I think your thoughts about a far more interactive and physical Desktop experience are excellent, and it is certainly an avenue that should be considered, as it will allow greater flexibility for Desktop environments as a whole and greater accessibility too.

    As a KDE User myself I find the predictability and skill set of the GUI to be equal to, if not greater than, that of Windows’ GUI; this is what appeals to me, although, using a Mac also, I found GNOME very easy to use.

    I agree with your comments regarding the User nowadays having to learn how to use a Computer, but I think it should allow for a transparent skill set to be used also; that is, it should be easy for a User who is new to Linux GUIs – KDE & GNOME et al – to transfer their previous navigation techniques with ease and apply them to their new environment.

    As with Linux itself, the User can customise their Interface to the degree that they wish to; therefore not reducing functionality, but simplifying and/or extending the abilities of the GUI in relation to the User’s needs, creating a flexible, targeted environment no matter what experience the User may or may not have.

    As Matthias Ettrich’s 1996 NewsGroup post reads regarding the KDE environment -

    “In my humble opinion a GUI should offer a complete graphical environment. It should allow a user to do his everyday tasks with it, like starting Applications, reading E-Mail, configuring his Desktop; all parts must fit together and work together.”

    “The goal is not to create a GUI for the complete UNIX system or the System Administrator; the idea is to create a GUI for an End User.”

    Regardless of your GUI environment, I think the above excerpt should be noted for its relevance, not simply in relation to where Desktop technologies currently reside, but in relation to advancements of these technologies too.

    :smile:

  • Joe Buck

    The kind of physicality you describe makes sense for the user who is trying to create a work of art. But if the user has to physically manipulate every object, the computer doesn’t really assist the user in speeding things up much: anyone remember the user interface Tom Cruise used in “Minority Report”? That’s the kind of thing you want if you want to force your users into physical exercise to improve their health, but most people couldn’t deal with all those vigorous Wii-style gestures for more than twenty minutes or so. We need a desktop that doesn’t cause repetitive stress injuries!

    Instead, think of all those situations where we experienced users switch to command lines, Perl, or Python. A typical situation is that we have a thousand images and somehow want to select a hundred of them and organize them and transform them in some way. How can we make this kind of thing intuitive to a non-programmer?

  • Kanenas

    About physicality: whenever I hear about physicality I think of an “object” model. Everything should be an object with “actions” and “properties”.

    Also every object’s actions and properties should be modifiable by different programs. So you could have multiple programs adding actions or properties to an object.

    Why doesn’t the desktop in gnome have the option to change the resolution, or change the number of desktops, when you right-click on it? These are “properties” of the desktop object and they should be realized by the corresponding programs.

    Whenever you install a new program in a gnome desktop, the new actions and properties it enables should be registered with the corresponding objects.
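
    A rough sketch of such a registry (all names invented, just to show the idea):

        # Installed programs register the actions they can perform on a type of
        # object; the desktop builds right-click menus from the registry.
        # Hypothetical code, names invented for illustration.

        ACTION_REGISTRY = {}        # object type -> list of (label, handler)

        def register_action(object_type, label, handler):
            ACTION_REGISTRY.setdefault(object_type, []).append((label, handler))

        def actions_for(object_type):
            return ACTION_REGISTRY.get(object_type, [])

        # An image editor registers its abilities when it is installed...
        def rotate_image(obj):
            print("rotating %s" % obj)

        register_action("image", "Rotate", rotate_image)

        # ...and a display tool adds a property-style action to the desktop object.
        def change_resolution(obj):
            print("opening resolution dialog for %s" % obj)

        register_action("desktop", "Change Resolution...", change_resolution)

        # Right-clicking an object just walks the registry for its type.
        for label, handler in actions_for("desktop"):
            print("menu item: %s" % label)
            handler("desktop")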

    Also, RISC OS’s file operations are a great model to follow. Whenever you wish to save, you just drag the document icon to the file manager window corresponding to the directory you want it in.

    Right now only loading works that way, which is a pity.

  • Robert Devi

    When ever you wish to save, you just drag the document icon to the file manager’s window corresponding to the directory you want it in.

    This is a good optional approach (I used it in OS/2 and thought it was useful), but it’s not a good general approach because: 1) it’s not good for accessibility (visual and motor disabilities); 2) it doesn’t suit touch typists; 3) discoverability – having a save menu item tells you that the document can be saved. It’s not at all obvious that a document can be saved with the drag-and-drop model unless you actually try it out.

    The object model is a good approach, but it faces the problem of scalability when you allow actions to auto register for objects. Just imagine right-clicking a button and having a 1000 item menu pop up. Another problem is the multi-select issue Joe Buck pointed out.

    If your data is homogeneous, there isn’t an issue other than it being cumbersome to find and select (especially if you accidentally unselect and have to start again), and the fact that sometimes you want to open one application instance per file (e.g. a transformation tool) and sometimes you want to send all the files to a single application at once (e.g. a music player – you don’t want 1000 separate instances, just one with a 1000-file playlist). It would be nice to have a “shelf” where you could select/drop items and apply actions to them in this multi-select case, but I’ve yet to see an intuitive interface for this.

    Where the object model really breaks down is for multi-selects on heterogeneous data. Sometimes it’s possible to force-fit data to be uniform (i.e. if some text is bold and some is not, you can have the option to ignore the current settings and bold or unbold all of it), but other times it isn’t (converting to PDF makes sense for a document but not for a socket). You also get into amusing situations where the same word means different things (e.g. a “Draw” method for a canvas object would behave very differently from one for a cowboy object). I’ve yet to see a good way to handle this automatically in an intuitive way.

  • http://www.loudmouthman.com Nik Butler

    Well, I’ve been umming and ahhing over blogging about this, since I work with so many clients whose employees are using Windows (not trained to use Windows, I should add), but I’ve twittered it instead, so I will leave you with this for now: http://twitter.com/loudmouthman/statuses/14537041 ; in relation to the absolute end user, admin and day-to-day workers who just use applications, it sums up my frustration with every interface that I watch them try to handle. But thanks for kicking off this conversation. I will be back.

  • http://www.the-gay-bar.com/index.php/2007/03/29/good-read-for-interface-designers/ I wanna spend all your money … » Blog Archive » Good read for interface designers

    [...] Jono Bacon, a GNOME developer has summarized some ideas that need to be implemented for the “next big GNOME release”. [...]

  • http://tola.me.uk Ben Francis

    Hi Jono,

    This stuff bugs me a lot. I’m doing a degree in Interactive Systems so it bugs me even more than most. I have a lot of opinions on this topic which I won’t go into here because I bang on about them enough on my web site.

    One thing I will say here:

    Stop talking about the desktop. Please, why is everyone obsessed with this ancient design metaphor? My life is not a desk. There are very few tasks that I use my computer for that I would sit at a desk to do. Awesome new 3D technology comes along with enormous potential for innovation and what do we do with it? We put six frigging desktops on the sides of a cube! What is with that?

    I believe in the Free Software dream. I don’t believe we will get there by creating a better desktop than everyone else. We need to be thinking in terms of tasks, in terms of “information appliances” to carry out those tasks (nod to Donald Norman), and the huge potential of open standards and networks to connect them together. That’s where Free Software’s strengths will be, and that’s where Linux is already becoming most successful – sitting on web servers and embedded in devices.

    I have a head full of ideas on this, I’m currently trying to figure out ways of making the time and money available to make them happen.

  • Damian Wojslaw

    Uhm. Let me disagree with the desktop becoming more real-life-like. Real-life desktops, shelves and so on work in three dimensions. (Let’s stop at three, okay? :)) But your desktop on the computer is a two-dimensional representation of these three dimensions. When you turn a document around to scribble on it in real life, you do this effortlessly and instantly. In your two-dimensional representation you’ll need a workaround, like a special mouse gesture or a menu item or a keyboard shortcut. In which case, all it becomes is a wonderful visual distraction. Until the day that we build a 3D display which can accept user input directly, I’ll stay with my current GNOME/KDE desktop. Still, a wonderful vision to pursue.

  • Nico

    “Do you ever have those experiences where you think ‘it would make sense if it worked this way, I wonder if it does’ and to your surprise it does?”

    That reminds me of Google software! I often think: working this way would be cool. I try it, and wow… impressed, it works like this! Of course they have a lot of money, and do a lot of interface-interaction analysis and research. But this pays off – their products are great and usable!

  • nim-nim

    Reading this I can’t help thinking Jono is very new to the usability field. I thought about the same things once. Then wonderful sites like the “UI hall of shame” drove the point home that things are not that simple.

    I hope Jono finds the time to speak with usability professionals and learn why a lot of his proposals didn’t work for others. And then use his drive to create an actually new desktop (as opposed to re-creating many mistakes others made before).

  • http://commandline.org.uk Zeth

    Well, another major thing holding back innovation, not mentioned in the post, is that any innovation in the Gnome desktop would make it look less like Windows, which for some people is a feature.

    One ‘free desktop’ that has felt able to do this is the OLPC’s Sugar interface, and I think it is something like what we will all be using in ten years’ time.

    I think the problem is with the concept of ‘programs’. If you look at the One Laptop Per Child, you can go from writing text to looking at the web to chat, and you have not (knowingly) opened any programs; it’s just one giant metaprogram. I think the idea also is that in the background, some programs are loaded predictively before you even know you want them.

    So getting rid of all the icons and menu bars is, I think, the way forward. Gnome should hide away the fact you are using Firefox/Epiphany or OpenOffice or Totem. To the user it is just a video, or just a bit of text.

    Another cool thing that the OLPC has is that the web browser is built into the desktop; you never really close it.

    Getting rid of all the brand names (Firefox, OpenOffice etc.) and having the core apps built in would be quite a shift in mindset. A nice start would be to make the apps more modular and more lego-ish, so I can open a gedit tab in the web browser, for example.

  • http://iruel.net/ Sardaukar

    I believe Gnome has hit a kind of wall – in terms of desktop UI, it has reached a point where it is very, very good (in the usual “PC”, “folders”, “do like the computer” kind of interaction) and herein lies a dilemma: we can continue to refine 2.x ad infinitum or move towards a radical new paradigm. Still, a LOT of people that use computers these days are already accustomed to clicking, double-clicking, drag-and-drop and so on.

    I for one do not wish my desktop to look like my desk, not because it is a mess (and cleaning up would be a drag) but because I’m not using a real desk. So, better ways to work with a PC may exist. We have the potential to transcend the physical boundaries of real life with multi-layered desktops and see-through windows and rotating whatnots.

    Sticking with the constraints of reality to improve usability would, speaking from my own perception, alienate users who are used to this transcending of concepts.

    Still, thinking of something other than the current way to do things is positive, without a doubt.

  • http://www.qdh.org.uk/wordpress/?p=154 Quick and Dirty Hacks » The finishing touches are important

    [...] After reading and listening (LR) to Jono Bacon rant on about the direction of GNOME a subject on which I whole heartily agree and something you’ll see me working on in the future it seems strange to me that GNOME don’t have teams to deal with desktop continuity. Usability is one thing, and is the big buzz word being thrown around at the minute and since guadec. However usability being an important issue as it is, it must be connected to continuity of the desktop. In movies they employ people to make sure that if a actor/actress is wearing a red top in one scene, he doesn’t end up wearing the blue top featured later in the movie in the next scene even though the filming of the two scenes are back to back. Wardrobe and continuity make this happen, continuity ensures that the flow of the film isn’t damaged by inconsistencies that viewers will notice. [...]

  • http://www.qdh.org.uk/wordpress/?p=58 Quick and Dirty Hacks » Neural ear: A first step into human interface AI

    [...] ** UPDATE Apr 2007 ** Maybe I shouldn’t have just thrown this out there, it seems apple are now working on similar ideas. That’ll teach me… Also this relates to Jono’s recent post about Organic user interfaces May 30th, 2006 [...]

  • http://www.worldofmu.com/?p=8 GNOME 2.18 Shows Incremental Improvement |

    [...] While working on this review, I noticed several developers on Planet GNOME talking about ideas for the next release, a roadmap process, and the need to start thinking about a GNOME 3.0 or next-gen GNOME, so maybe GNOME will come up with some radical improvements in the nearish future. I suspect that KDE 4.0 will provide a kick in the pants for GNOME folks to think about “catching up” when KDE 4.0 is released. [...]

  • Scott

    [...] We need our applications to be aware of what the user wants to do [...]

    Equally critical is knowing what the user /is/ doing. I have always been dumbfounded by how oblivious my computer is to what I’m doing. Case in point: the screensaver turns on while I am watching a video online. I do multiple things with my computer at the same time; each ought to be aware of the others. The screensaver is the perfect app to illustrate this point since it’s all about intuiting user state (at or away from the screen). Screensavers ought to:

    • Never /ever/ start while I’m watching a video. With Totem, with VLC, in a browser; any video, any place, no screensaver.

    • If I jiggle the mouse immediately after the screensaver comes on, I’M BUSY. It should wait longer before coming on again. And longer still if it happens again.

    • I read loads of text every day. Emails, websites, Wikipedia articles: It would be trivial for the computer to statistically calculate my average reading speed based upon the volume of text and the speed at which I scroll. The screensaver should /never/ come on before I have had time to read all of the text on screen.

    My screensaver is set to 2 hrs because it’s too stupid to bear. If it were smart, it might actually be useful. Another example of universal state info: if I do /anything/ with audio (play a video, start a VOIP call, begin recording from my mic) my music should automatically pause. If I finish my interstitial aural activity within a reasonable time (a few minutes), then the music ought to resume. But don’t stop there. If I open a video but then mute it, bring back the music. Maybe even give me 50% music volume when I set the movie to 50% volume. Simple stuff like that.
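
    The video case could even be handled today: a player can ask the screensaver to hold off over D-Bus while it plays. This is just a sketch with dbus-python; the bus and interface names follow the freedesktop screensaver spec, but exact names vary between screensaver daemons, so treat them as an assumption to check:

        import dbus

        # Ask the screensaver daemon not to kick in while a video is playing.
        # Bus/interface names follow the freedesktop screensaver spec; some
        # desktops use their own variants, so check what your daemon exports.
        bus = dbus.SessionBus()
        proxy = bus.get_object("org.freedesktop.ScreenSaver",
                               "/org/freedesktop/ScreenSaver")
        saver = dbus.Interface(proxy, "org.freedesktop.ScreenSaver")

        cookie = saver.Inhibit("my-video-player", "Playing a video")

        # ... playback happens here ...

        # Release the inhibition as soon as playback stops or is paused.
        saver.UnInhibit(cookie)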

    “I’m listening to music. Don’t you know that?! Who’s the freakin’ logic machine in this relationship!”

  • Scott

    A further note on my above comment: I think a key component of useful user-state info for applications is standard DBus APIs for common tasks. You should be able to use any music app you want and DBus will provide the API for pausing, playing, volume change, etc. Same goes for all other common tasks (watching video, writing, IMing, emailing, &c.).
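
    As a sketch of what I mean, any music player could export the same D-Bus object so that other parts of the desktop can pause it without caring which player it is. The interface name below is invented purely for illustration:

        import dbus
        import dbus.service
        import dbus.mainloop.glib
        import gobject

        # Hypothetical standard interface that every music player would implement;
        # the name org.gnome.example.MediaControl is invented for this sketch.
        class MusicPlayerService(dbus.service.Object):
            def __init__(self):
                bus_name = dbus.service.BusName("org.gnome.example.MusicPlayer",
                                                bus=dbus.SessionBus())
                dbus.service.Object.__init__(self, bus_name, "/Player")

            @dbus.service.method("org.gnome.example.MediaControl")
            def Pause(self):
                print("pausing playback")

            @dbus.service.method("org.gnome.example.MediaControl", in_signature="d")
            def SetVolume(self, level):
                print("volume set to %.0f%%" % (level * 100))

        if __name__ == "__main__":
            dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)
            player = MusicPlayerService()
            gobject.MainLoop().run()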

  • Jarlath Reidy

    I agree with most of what you said, Jono. But as someone mentioned, contextual tools need to be carefully executed. Jokosher seems to do this very well from what I’ve seen, but a lesson could be learned from MS Word. The menus begin to hide unused functions over time, and every day in my office I see people – and myself, an experienced computer user – looking for something we thought was ‘around here somewhere’ previously.

    I totally agree with your belief that the computing experience would be more intuitive if it suitably emulated our real tactile environment. On a minor note, I made an attempt a few years ago to make a different type of music player for this reason. Lack of programming expertise and maturity on my part meant it never really made it past concept, but if anyone wants a look or a laugh -> http://www24.brinkster.com/jreidy/

  • MikeC

    Pile theory is just replacing one imposed (And, I agree, outdated) metaphor with another. The screen is flat and so are our images and documents. It’s good that people are experimenting with alternative interfaces but much of that experimentation is just that. Heavily 3D workspaces often look cool and are great fun but their contribution to actual productivity is highly questionable.

    The user needs different views for searching (contextual responses to queries), browsing (everything is a playlist), sorting (piles could be useful for sorting but shouldn’t be imposed) and working.

    We should definitely question the messy window paradigm, and the desktop metaphor is outdated and broken. Moving to a pseudo-naturalistic post-modern concept won’t fix it.

    For most daily computer tasks, people shouldn’t have to think about apps at all. (Hmm, which app to open this photo in?)

    If you base the interface around:

    People – this is an obvious one.

    OBJECTS (Data, Text, Images, music, videos) Why can’t you just create a new table object then call spreadsheet-like functions as you need them?

    Associations (Metadata: this photo is associated with these people and will appear in the “virtual folder” (or whatever) associated with them.)

    Views (consistent but context-sensitive views, both structured and, yes, free-form and semi-free-form: lists, icons of variable size, WORKSPACES – freeform spaces containing context data for particular tasks). Variable views for contacts: pull the border to view increasingly more data, starting from a photo icon with a name, then more… up to showing all the objects associated with that contact.

    Collections (smart folders, playlists, photo albums, contact lists, projects, all-the-files-bob-has-ever-sent-me. Think of project folders which show all the people and objects associated with that project, but not as a simple list!)

    Actions or tasks (Editing text, sending, painting, adding a layer to an image, chatting, searching, browsing etc.)

    Tools to perform actions (context sensitive. ref:Apple inspector for iWork apps, context menus, Office Ribbon etc.) You should be able to click on an image in your finder/file manager and perform common editing tasks without going any further but the transition to more complex editing shouldn’t make you feel you are opening a particular app (Especially not the GIMP!)

    You won’t go far wrong. The above are not metaphors but what is actually going on between the user and their computer. None of the above requires or justifies much 3D, although an accelerated and rich-looking UI is great. There are many tools and frameworks (Tracker, Gstreamer, Telepathy, gegl, Dbus) which are maturing nicely to allow a more integrated experience. Dare I mention such things as Tinymail and the Dbus port of EDS?

    One thing to consider about UI is that if it’s good on a small screen, it probably translates to a big screen well but the opposite is not true. Linux has huge potential on portable devices.

    Customisation is great. Look at Adium on the Mac. This kind of customisation could be applied to many different cases. Diary/calendar/journal can be themed, watermarked etc. WebKit….

    The current window paradigm wastes lots of space and requires lots of window resizing/moving/finding. There’s a lot to recommend a non-overlapping interface (mmm, panes, not windows).

    Lots of data the computer provides (new mail, person signed on, non-critical system messages, news items) could be sent to a standardised notification framework, and the user can choose how that is displayed (menubar).

    I think I’ve ranted enough for now…

    Oh yeah, how about adopting D? It could be the basis of an entire “better than OpenStep” OO/OS framework. Or is that a suggestion too far? :grin:

    Mike C

  • http://troy-sobotka.blogspot.com troy_s

    Design school 101:

    1) Audience? 2) Goal?

    It’s pretty simple. Your statement “and ultimately no-one is 100% correct.” is probably a little incorrect in and of itself.

    There is no such thing as ‘general computer interaction’. It’s a myth.

    Pick an audience and pick your goal / communication. Inevitably it is why Apple is much lauded on the OS landscape. Bear in mind, they also have a pile of well educated and trained designers.

    If GNOME wants to forge ahead, it needs to quit worrying about the mythical ‘everyone’ and more about ‘you – yes you there!’.

  • Arkadi

    I so agree with you. I am an Ubuntu fan and use Gnome. I want to help with my programming skills, but for now I have some exams to pass. This is a must-have feature in a home desktop operating system. I have never written any software on Linux but I will begin soon, and my goal is to polish everything and add some missing functionality in the GUI parts.