Friend Me
 Follow Me
 Feed Me
a blog by ken pardue

Archive for the 'Source Code' Category

Wrapping my head around Ember.js.

Tuesday, April 17th, 2012

So, I’m a PHP scripter who hasn’t done a lot of coding in a while. I’ve done some, but not a lot, of JavaScript, and college was, well, let’s just say lacking in formal programming education. It seems pretty clear that the next step in responsive web applications is to run the entire app on the client side, communicating with the server only to download the application file and, afterward, to talk to the database. I get that. I also like what I’ve read about Ember.js and its approach to solving these problems (but I’m open to other solutions, too).

I’ve read the information on the Ember.js home page, and I get most of it (I think), but what I’m lacking is knowledge about how to structure my app as I start developing a slightly more complex site/app. There are so many different ways people are solving problems that there’s no single this-is-how-you-get-from-point-a-to-a-functional-web-app guide. And if there were, I’m sure it would assume that I already know how to structure MVC apps and write advanced JavaScript.

So here’s where I’d like to go with Ember.js: I want an app that has two main templates: one for user login which is served up initially, and one to house the rest of the app after logging in. After logging in, I want a main navigation pane that controls/changes out content in another pane (eventually based on user permissions, but I’ll start with just having it work).

How should I structure my app?  Here’s what Ember.js provides out of the box in their starter kit:
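The listing itself didn’t survive in this post, but from what I remember the 2012-era starter kit was laid out roughly like this (names approximate, so treat this as a sketch rather than the exact contents):

  • index.html
  • js/app.js
  • js/libs/ (ember.js, handlebars.js, jquery.js)
  • css/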

So, do all of my models, views, and controllers for the entire app go into app.js?  If so, how do I structure those components inside the file so that they are easy to navigate?  How should I keep them organized: by component, or by keeping all models together, views together, etc.?  If not, how do I load the separate components?  Should I have folders for models, views, and controllers?  If that’s the case, how do I load them?  Do I have a giant list of <script src="..."> elements in my index.html page?
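For illustration, the brute-force version of that last option would look something like this (all file names hypothetical):

```html
<!-- index.html: loading each component file individually -->
<script src="js/libs/jquery.js"></script>
<script src="js/libs/handlebars.js"></script>
<script src="js/libs/ember.js"></script>
<script src="js/app.js"></script>
<script src="js/models/user.js"></script>
<script src="js/controllers/app_controller.js"></script>
<script src="js/views/login_view.js"></script>
<!-- ...and so on for every model, view, and controller -->
```

It works, but the list grows with every new component, which is exactly the organizational problem I’m asking about.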

As I’m planning out my components,  I’m assuming that something like this would be what I need to do:

If I’m understanding this right, I would have an appController that launches when the application first begins.  It would have a loggedIn property.  I would set an observer on that property so that if it were false, I would view.append() a login template to the site Handlebars template.  Otherwise, I would view.remove() the login template and view.append() the main page template.  The main page view would contain a navigation view that never changes but controls what shows up in the content view.  It would load whatever template is associated with the active component button, and each template would have its own models, views, and controllers for the functionality associated with it.  (How do I load in the extra templates and their associated models, views, and controllers?)
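To make the loggedIn-observer idea concrete without dragging in the framework, here’s a framework-free JavaScript sketch of the mechanism (createObservable and the view names are my own inventions, not Ember API; in Ember the observer machinery comes from the framework itself):

```javascript
// Minimal hand-rolled observable, standing in for an Ember controller property.
function createObservable(initial) {
  let value = initial;
  const listeners = [];
  return {
    get: () => value,
    set(next) {
      value = next;
      listeners.forEach((fn) => fn(next));
    },
    observe(fn) { listeners.push(fn); },
  };
}

const loggedIn = createObservable(false);
let currentView = null;

// Stand-in for view.remove()/view.append() against the site template.
loggedIn.observe((isLoggedIn) => {
  currentView = isLoggedIn ? "main" : "login";
});

loggedIn.set(false); // show the login template
loggedIn.set(true);  // swap in the main page template
```

The observer fires every time the property changes, so the swap between login and main views happens automatically wherever loggedIn is set.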

Is that anywhere near a correct big picture view?  How on earth do I structure the files and code associated with that?  I’m sure state charts play some sort of a role here, but I have no idea how to plan them out and implement them.  I’m encouraged that they appear to be built into ember.js now, according to the website, because there’s so much new material to learn here I’d rather focus on what’s available out of the box rather than how to do my task with a half dozen different frameworks or components.

In addition to structuring the app, I know I still have to learn the process of writing a RESTful back end, another huge learning task in itself, but I’m okay for the time being just using fixture data.  Heck, even if I used more traditional methods for developing the components I’d be okay; I’d just really like the responsiveness of an interface like the one above that’s super fast.


The Open Web Isn’t Always Free

Wednesday, June 10th, 2009

I thought that I’d blow a little of the dust off of this blog to write a slightly-longer-than-tweetable rant about the state of the open web, and more specifically about the state of HTML 5 and open video.

For those that aren’t technology enthusiasts like myself and don’t really spend time keeping up with this stuff, some absolutely amazing things are being done on the Internet today.  Thanks to improvements in the way JavaScript is processed, the day when our applications reside exclusively in the cloud seems a lot closer.

One of the buzzword standards that browser makers are presently tripping over one another to support is HTML5’s <video> and <audio> tags, which aim to make planting and viewing rich media on the Internet as easy as dropping an <img> on the page.  The goal is to get away from plugins like the resource-hogging Flash, and to make videos scriptable.  Mozilla and Webkit have both made some amazingly impressive demos using the technology.  Browsers will one day soon support the native display of video just as easily as they display a jpeg, png, or gif image.  Ah, but there’s the rub… which video format should browsers be able to play?

H.264 is the gold standard of video compression format baked into pretty much everything.  Hardware acceleration on video cards?  Got it.  iPhone?  Got it (in fact, H.264 is pretty much all it’ll play.)  Bluray?  Got it.  Google Chrome?  Got it.  Apple Safari?  Got it.  DivX 7?  Got it.  Windows 7?  Got it.  Mac OS X?  Got it.  Quicktime X?  It’s practically the house built upon the foundation of H.264.  Even the White House uses H.264 MP4 files.  The catch is that H.264 has been carefully marketed by a group of patent holders over the better part of the last decade to increase market adoption, and adopt it did.  However, in 2011 the grace period on both encoder and decoder expires and licensing fees will need to be paid to the MPEG-LA group.  That makes it a lot less attractive for those advocating an open, free, and standard Internet.

On the other hand, there’s a format called Ogg Theora, which is a little like the red-headed stepchild that lost the video race in the late 1990’s and was forgotten about.  Although there are no guarantees against submarine patents, Theora claims to be patent, license, and royalty free.  The problem is that the format, although improving, is very poor in quality, has virtually no support for hardware acceleration, and isn’t widely implemented.  It’s so inefficient, in fact, that it’s been argued that the excess bandwidth cost of Theora video would outweigh the cost of licensing H.264.  Because it is free and open, however, Mozilla has embraced it as practically the second coming, even investing $100,000 to improve the codec’s quality and distribution.

Although it’s unclear what will happen to the state of H.264 encoders and decoders come 2011, it seems likely that even free and open source implementations like x264/ffmpeg will no longer be legally distributable in the United States.  But, as nice as the patent-free and license-free concept behind Theora is, without hardware and major vendor support it’s going to be stuck in geek enthusiast circles.

So, instead of this wonderful world where developers can drop a video into a web page in a single format and be confident that it will work, we’re back in the 1990’s and Flash Video looks like it’ll never go away.  In order to realistically support HTML5 video, developers are still stuck wondering what codec to use or waste valuable computer cycles and bandwidth to support both.

There will always be a need for fallbacks, since Microsoft is a lumbering buffoon and isn’t likely to support <video> any time soon.  But the thing is, even Flash plays H.264 video.  Were Mozilla to have elected to do the logical thing and license an H.264 decoder, a web developer could have a single video file encoded in H.264 which would play in all modern browsers, iPhone included, and then, as detailed here, simply load that same H.264 file into an Adobe Flash player as a fallback.
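Concretely, the markup for that approach would look something like this (file names and the player script are hypothetical):

```html
<!-- One H.264 file, used both by <video>-capable browsers and by the
     Flash fallback player. -->
<video src="movie.mp4" width="640" height="360" controls>
  <!-- Browsers without <video> support fall through to Flash,
       which plays the very same H.264 file. -->
  <object type="application/x-shockwave-flash" data="player.swf"
          width="640" height="360">
    <param name="movie" value="player.swf">
    <param name="flashvars" value="file=movie.mp4">
  </object>
</video>
```

One file, one encode, every browser covered; that’s the world we don’t get without an H.264 baseline.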

Unfortunately, all the other browser makers that implement H.264 combined don’t add up to Mozilla’s market share.  Ultimately, though, what will decide what format gets accepted as the baseline standard for HTML5 video will probably depend on two things: Youtube, which carries the vast weight of Internet video on its back, and mobiles (phones+netbooks).  Mobiles will need hardware acceleration in order to efficiently play the video on limited battery life, which doesn’t exist for Theora.  And guess what?  Youtube has been experimenting with HTML5 video as of late, and guess what codec they’re using?  Yep.  H.264.

Give it up, Mozilla. Just license the darned decoder instead of making a political statement.

Centralized Data Services in Linux

Friday, August 29th, 2008

Mozilla Gecko Office – Why Not?

Wednesday, August 27th, 2008

So I’ve been using the OpenOffice 3.0 betas on my Mac, and I just can’t get past the feeling that the folks at Sun are just trying to keep up with the 1990’s.  While it is nice that the latest version runs under OS X without using X11, it remains slow and kludgy to the point of frustration.  There weren’t enough features added to justify a major version jump, although somehow OpenOffice has taken a major jump down in performance.  Granted, these are betas and somewhat better performance is to be expected from the final release, but NeoOffice compares only slightly better.  Scroll speed is very jerky (sometimes freezing between page switches), text appears poorly antialiased and poorly kerned on Windows and Mac (and downright abominable on Linux), images appear jagged and seem to move around the page inexplicably, manually positioning images and text frames within a page of text is guesswork at best, and the interface… well, let’s just not start on that.

And yet, OpenOffice.org is the poster child for open source office suites.  It’s included by default in nearly every single Linux distribution and is proclaimed as the Microsoft Office alternative on Windows.  It, and derivatives of it, remain the only viable implementations of the OpenDocument format.

Certainly there must be a better way to do this.  Surely there is a way to get consistent cross-platform performance, with high-quality text and image rendering and support for networking and the eventual move to cloud-based applications.  I think there is, and the answer lies with Mozilla.  Mozilla stormed onto the scene several years ago and today has become the cornerstone for BOTH open standards advocacy on the web AND for intuitive, navigable, and ultimately usable user interfaces.  Why not make a Mozilla Office Suite?  There are many arguments in favor of this:

  • Gecko is a mature platform that claims 140 million Firefox users as of February 2008 (probably many more now that Firefox 3 has been released) and 48 million Thunderbird downloads, versus OpenOffice.org’s 98 million downloads.
  • The ethos surrounding Mozilla is one of providing the end user the best experience, not necessarily the most options.  This has led them to develop a platform that is extremely light weight and focused on performance.
  • Much of Mozilla’s software is written in JavaScript, which would seem to be a hindrance for a large-scale office suite, but the most recent builds of their optimized JavaScript interpreter approach native code speeds, with even more improvements on the way.
  • All of the networking components, text rendering/kerning components, and image rendering and scaling components  are already in place and are well tested across all major platforms.
  • There is a proven extension system with automatic checking for updates polished and in place.
  • A lot of the basic composition functionality is already contained in the Thunderbird project.
  • Mozilla is now working to integrate open and platform-specific multimedia frameworks more tightly into their products, with native Ogg Theora support in the browser right alongside support for the video framework of whatever platform it’s running on (Quicktime for OS X, DirectShow for Windows, GStreamer, etc., for Linux).  This would be a boon to those using embedded video in documents, or more practically, in presentations.
  • Since OpenDocument is XML based, it would be an easy transition to make native rendering of OpenDocument files available for viewing and collaboration on the web.  Imagine the maturity of Google Documents if it could leverage Mozilla Office’s capabilities.  It would be the single best way to turn XULRunner into the ultimate stand-alone platform, as some have recently talked about doing.
  • Not anything specific to Mozilla here, but the user interface could be optimized with tabs for different documents, a platform-specific look and feel that feels at home regardless of what platform you’re on, smooth scrolling through documents (I pasted 150+ pages into Thunderbird, albeit without images, and it scrolled through it satisfyingly smoothly), and much, much more.
So, in short, Mozilla Office for President 2008!  Now, who wants to code it?

Shuttleworth is the Man!

Wednesday, July 23rd, 2008

I’ve always wanted to be a Linux guy, using and supporting as much as possible the philosophy of Free, Libre Open Source Software, but every time I’ve been put off by the amount of time involved in getting simple things done (one should NOT have to go to Google to figure out how to add fonts to the system) and the fact that the graphical experience was either too mundane or so effusive that it actually got in the way of the user experience.  Don’t get me wrong, I’m a developer and a power user, but I’d much rather be spending my time being productive than tweaking in a terminal to infinity.

So a few years back, Ubuntu came onto the scene declaring that the user should never have to go into the command line to do routine stuff and, over the past few years of releases, has slowly made Linux easier and more intuitive to use.  Now they’re setting themselves the lofty goal of targeting Apple in terms of user experience.

The idea of a freely available operating system fostering the growth of technology in the developing world and the embrace of open standards has always intrigued me.  The more I read about Mark Shuttleworth, the more I like him.  My favorite quote from his recent OSCON keynote: “The great task in front of us over the next two years is to lift the experience of the Linux desktop from something that is stable and robust and not so pretty, into something that is art.” Art!  From a Linux guy!  This guy really should be on Apple’s Think Different commercial.  He’s one of those people who’s crazy enough to think he can change the world.

Now, don’t get me wrong.  I love my Apple computer and doubt I’ll be switching my primary OS any time soon.  Apple has set a wonderful precedent in user experience that others will be hard pressed to exceed and also embraces some of the same open source philosophies that I do.  But I’ll definitely continue to keep my eye on Ubuntu and the inspiration that Mark Shuttleworth brings.  After all, Steve Jobs has never been to space.

WWDC 2008: Pinning My Hopes and Dreams

Friday, June 6th, 2008

It’s that time of year again.  Twice a year, in January and in July, something special happens.  Journalists’ and bloggers’ keyboards are aflutter, eye-strain headaches abound from staring at grainy “spy shots” of a certain theater in San Francisco, and the rumor mills swell uncontrollably with what Dear Leader, Steve Jobs, might unveil.

This year’s Apple Worldwide Developer’s Conference is obviously no different.  The past few days have seen the almost certain prediction of the iPhone 3G and the probable rebranding of .Mac to MobileMe.  But there’s always something that slips in unnoticed.  I originally thought that it was way too soon for us to be hearing anything about a new iWork update, since iWork ’08 hasn’t been out for very long.  But all the speculation about a possible OS upgrade has me thinking otherwise, since Leopard came out months after iWork ’08.

Personally I hope (as I have anxiously hoped for the last two iWork releases) to see Apple get firmly behind the OpenDocument standard for its suite of programs so that iWork gains a TRUE place in a mixed platform corporate (and home) environment. OpenDocument makes a lot of sense for the following reasons:

1) Apple has a history of supporting open standards where it bolsters its business and reduces the complexity on their own developers,

2) it would be FAR easier for Apple to implement than native support for OOXML (heck, ODF is even easier for Microsoft to implement in their OWN product than OOXML is),

3) No more dialogs asking, “Do you want to save this in iWork ’06 format, iWork ’08 format, iWork…. ” What’s good for one is good for everyone.

4) OpenDocument is extensible so they could… possibly… implement such features as Numbers’ multi-table-on-a-single-sheet feature (not sure about the viability of this one),

5) it will make Apple not look like they’re drinking Microsoft’s Kool-Aid, while, when native ODF support is added to MS Office next year, Apple will be totally compatible and competitive with not just most Windows users but Linux/open source advocates too, and

6) Apple obviously has expressed interest in heating up competition with Microsoft on the desktop since the disaster called Vista.  If Apple ever hopes to bring iWork to Windows, joining iTunes and Safari, they’ll need to have a document format that’s not based on bundles.  A .pages file is just a folder as far as Windows is concerned.

Of course, the obvious argument against this is the tremendous effort that Apple has put into evolving its own XML document format.  It’s hard to see Apple just tossing all the work that brought them so far so fast in iWork’s three-year life.  But for myself, I would love to see an ODF-native iWork so that I can use a program with Apple pizzazz and not have to depend on the upcoming OpenOffice.org 3.0.  While it is the best ODF program on the market, they just don’t “get” the Mac platform.  Their clunky beta looks and feels like it belongs on a Windows ME installation, not on Mac OS X (or just OS X Leopard, as the new banners seem to have rebranded it).

OpenOffice.org 3.0 Beta Thoughts

Wednesday, May 7th, 2008

The OpenOffice.org 3.0 beta was released today.  I think I can already post about it, since it appears to be the same build as the BEA300m2 developer snapshot that I had been using.  Overall, it feels like a lackluster release that hasn’t received much usability love.  Really, you’d expect a lot more from a product that has broad corporate support from Sun Microsystems and IBM and is the de facto standard cross-platform office suite.  There’s a problem when your major version release takes upwards of two years to make and the big features that you highlight are “the new ‘Start Centre’, new fresh-looking icons, and a new zoom control in the status bar”.

I hate to tell the OpenOffice devs, but these ‘new fresh-looking icons’ passed the point of being either new or fresh looking around 2001. I know I’m a Mac guy and probably vain about my user interface, but seriously… these icons are unattractive at the small size, and downright hideous at the large size.  Tango icons look much better, and Tango is nothing to write home about.  Thing is, if it weren’t for those icons you wouldn’t even be able to tell the difference between 2.x and 3.x.

There seem to have been very few, if any, usability improvements.  Apple is doing innovative stuff with iWork Pages in simplifying the UI and adding context-sensitive formatting; IBM is doing some innovative stuff with Symphony by putting all of the context-sensitive editing on the right side of the screen to take better advantage of documents being vertical and most new monitors being widescreen; Microsoft is doing usability studies and trying to find a way that works better for their users, and although there have been some issues with the “Ribbon,” at least they’re trying.  I understand OpenOffice.org’s philosophy is ‘looks like Word ’97’, but can’t they find a better selling point than “you should use our product because we refuse to evolve from a familiar, crufty old interface”?

Some months ago, one of the developers argued against critics of OpenOffice.org’s look and feel, saying that it could and would be made to look native on its platforms, OS X in particular.  And one person posted on the 3.0 roadmap wiki extolling the merits of taking the approach that IBM was taking with Symphony.  I guess these people weren’t very high up on the food chain.

I’m a strong supporter of open standards, OASIS OpenDocument in particular.  I wholeheartedly believe that OOXML is wrong to be a standard because of the lack of attention to technical flaws, its complexity, and its less-than-a-single-vendor implementation (not to mention how the whole standardization process went down).  But, given the ISO’s approval of OOXML and the fact that this new OpenOffice.org represents the best of breed in ODF suites, I’m afraid that we’d all better start learning to speak Chinese… that is… recognizing OOXML.  Actually, I guess everyone else already has.

I realize that this is a lot of criticism for a fresh out of the oven Beta, but I also realize that there’s not likely to be many UI changes between now and 3.0 final in September.  At least I can count on some performance improvements though, because the Beta that I’m using runs like a crippled dog on a quad-core Mac Pro.

Lightning Looking Good

Friday, May 2nd, 2008

Lightning is supposed to reach 0.9 in the August timeframe, and it’s going to be a long wait.  I haven’t used Lightning because the interface was so kludgy to me that I didn’t feel like it was making me productive (yeah, superficial of me, whatever).  But at the recent Calendar face to face the developers put a lot of spit and polish into how the calendar works and addressed some real usability issues, focusing on giving the user the most important information (and no more) in a modern, attractive way.  God bless them, they even removed the 2px border on the months and replaced it with something less fugly.

A developer’s outline of some of the changes can be found on Bryan Clark’s blog here, and additional interface mockups can be found on the Mozilla Wiki here and here.

Now, I’m hopeful that the Thunderbird devs will also apply the spit and polish to the 3.0 release due out at the end of the year or (more likely) early next year.  I’d love to see Thunderbird come into the modern age of email and set defaults that people actually USE instead of being idealistic about how email SHOULD function.  Specifically:

  • The account setup is a mess, and there are way too many redundant options between Options and Accounts
  • All modern email programs just assume that you’re going to be using HTML.  I don’t know of any other (popular) program that would assume that you’re sending plain text or put up an annoying prompt to send in plain text, html, or both.  I know that email *should* be in plain text and there’s no reason for it not to, but people just don’t use it that way.
  • All modern email programs also assume a sans serif font for message composition.  While serifs are great for printed documents, they don’t work nearly as well for on-screen reading.
  • Why is the default behavior set to put the reply BELOW the message being replied to?  I mean, I understand that as a holdout from the newsgroup days it makes more logical sense for the conversation to flow properly from top to bottom with the more recent stuff at the bottom of the page.  But seriously… who uses email like that?
  • Nearly every email program I’ve ever seen that people actually use forwards messages inline and not as attachments.  Why does Thunderbird insist on the default being to forward as an attachment?

I know those are a couple of items that have been controversial within the developer community before, but whenever I recommend Thunderbird to someone else I find that they either stop using it or ask me to change it to work like Outlook Express.  I know those options can be changed, but it’s a confusing process to do so in the plethora of options menus.  It’s time to do to Thunderbird what Mozilla did to Firefox: Simplify, simplify, simplify, and add better defaults!
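For what it’s worth, the defaults in question are ordinary preferences under the hood; something along these lines in a prefs file would flip the behaviors above (pref names from memory, so double-check them against your build):

```js
// Compose in HTML by default (per-identity default)
pref("mail.identity.default.compose_html", true);
// Put the reply ABOVE the quoted message instead of below it
pref("mailnews.reply_on_top", 1);
// Forward messages inline rather than as attachments
pref("mail.forward_message_mode", 2);
```

Which only underscores the point: the fixes are trivial; it’s the defaults that need changing.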

Creating a XULRunner 1.9 App on OS X

Tuesday, April 29th, 2008

I’ve just created my first simple XULRunner-based application.  In order to make creating the application work in OS X, a number of different steps have to be taken from the Windows version.  Unfortunately, there doesn’t seem to be a comprehensive guide for newbies to do so, so I created my own based on several resources.  Since much of what follows is direct quotes or slightly modified, I want to be sure to give credit where credit is due:

Step 1: Install the XULRunner Framework

The first step is to download and install the XULRunner Framework from Mozilla’s developer site.  On the Mac, just run the installer, which installs XULRunner as XUL.framework in the /Library/Frameworks directory.

Step 2: Set up the Application Directory Structure

I created the root in a new /Users/{username}/Desktop/approotfolder folder, but you can create it wherever you like. Here is the subfolder structure:
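The listing was lost from this post, but it can be reconstructed from the directory review further down:

```
/myapp
  /chrome
    /content
      main.xul
    chrome.manifest
  /defaults
    /preferences
      prefs.js
  application.ini
```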


Notice that there are 4 files in the folder structure: application.ini, chrome.manifest, prefs.js, and main.xul.

Step 3: Set up the XUL Application Files


The application.ini file acts as the XULRunner entry point for your application. It specifies how your application intends to use the XULRunner platform as well as configure some information that XULRunner uses to run your application. Here is mine:

Name=Test App
Copyright=Copyright (c) 2006 Mark Finkle
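Only two lines of the file survived here; a complete application.ini for this kind of tutorial would look roughly like the following (the version numbers and BuildID are placeholders, not the original values):

```ini
[App]
Name=Test App
Version=0.1
BuildID=20080101
Copyright=Copyright (c) 2006 Mark Finkle

[Gecko]
MinVersion=1.9
```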



The chrome manifest file is used by XULRunner to define specific URIs which in turn are used to locate application resources. This will become clearer when we see how the “chrome://” URI is used. Application chrome can be in a single or a few JAR files or uncompressed as folders and files. I am using the uncompressed method for now. Here is my manifest:

content myapp file:content/


The prefs.js file tells XULRunner the name of the XUL file to use as the main window. Here is mine:
pref("toolkit.defaultChromeURI", "chrome://myapp/content/main.xul");

XULRunner preferences include:
toolkit.defaultChromeURI – Specifies the default window to open when the application is launched.
toolkit.defaultChromeFeatures – Specifies the features passed to window.open when the main application window is opened.
toolkit.singletonWindowType – Allows configuring the application to allow only one instance at a time.

This is described in further detail in XULRunner:Specifying Startup Chrome Window.


Finally, we need to create a simple XUL window, which is described in the file main.xul. Nothing fancy here, just the minimum we need to make a window. No menus or anything:

<?xml version="1.0"?>
<?xml-stylesheet href="chrome://global/skin/" type="text/css"?>

<window id="main" title="My App" width="300" height="300"
        xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
  <caption label="Hello World"/>
</window>

Note: Make sure there is no extra whitespace at the beginning of the XML/XUL file

Step 4: Set up the OS X .app directory structure.

XULRunner for Mac is slightly more complicated because of strict requirements for GUI apps running in OS X. First, go download XULRunner and install the package. It will create itself deep within /Library/Frameworks (quite separate from the Windows version). Alongside myapp, create a new directory called TestApp.app or something else ending in .app. Within this directory create one called Contents (capitalization is important), and within Contents create Frameworks and MacOS. Now create three symbolic links to complete the Mac directory structure:

(run these from inside the .app directory)

ln -s /Library/Frameworks/XUL.framework Contents/Frameworks/XUL.framework
ln -s ../../../myapp Contents/MacOS/Resources
ln -s /Library/Frameworks/XUL.framework/Versions/Current/xulrunner Contents/MacOS/xulrunner

If you would like to ship the application on a private install of XULRunner, you could always just copy the respective files into the XUL.framework and the MacOS/xulrunner directories.

Now create Contents/Info.plist and dump this in, making sure to change things in ALL CAPS. I am almost certain this is not optimal as it repeats itself a lot. But it is functional.  Note: when using XULRunner 1.9, it doesn’t seem to matter what is in this file, or even that it exists.  XULRunner generates its own Info.plist file.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
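The body of the plist didn’t survive in this post; a minimal (hypothetical) one would be along these lines:

```xml
<dict>
  <key>CFBundleExecutable</key>
  <string>xulrunner</string>
  <key>CFBundleIdentifier</key>
  <string>com.VENDORNAME.APPNAME</string>
  <key>CFBundleName</key>
  <string>APPNAME</string>
  <key>CFBundlePackageType</key>
  <string>APPL</string>
</dict>
</plist>
```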

Review Directory Structure:

As a review, here’s how our tree looks now:
.. /myapp
.... /chrome
...... /content
........ main.xul
...... chrome.manifest
.... /defaults
...... /preferences
........ prefs.js
.... application.ini
.. /TestApp.app
.... /Contents
...... /Frameworks
........ XUL.framework→/Library/Frameworks/XUL.framework
...... /MacOS
........ xulrunner→/Library/Frameworks/XUL.framework/Versions/Current/xulrunner
........ Resources→../../../myapp
...... Info.plist

Step 5: Run the Application

The moment of truth. We need to get XULRunner to launch the bare-bones application.  Before you can run a XULRunner application, you must install it using xulrunner’s --install-app command-line flag. Installing the application creates an OS X application bundle:

/Library/Frameworks/XUL.framework/xulrunner-bin --install-app /<path>/<to>/TestApp.app

Once installed, you can run the application:

/Library/Frameworks/XUL.framework/xulrunner-bin "/Applications/Finkle/Test App.app"

You should now see a window that looks something like this:

This application will output to Applications/Vendor Name/App Name (as specified in the application.ini).

Since XULRunner 1.9 seems to generate its own plist file (disregarding anything in the custom one), I’m not sure how to add application icons yet.  I’m sure there’s a way to specify this in the application.ini somehow, but since I’m brand new to XUL/XULRunner I can’t really speak to that.

Introducing Treefrog

Friday, April 25th, 2008

Treefrog, that is, the name that a friend has coined for my project “Genzilla,” a Mozilla/XUL-based, cross-platform genealogy application, is starting to take some shape.  I’m still very early in the would-be development of it and have a lot to learn about developing Real Applications (TM) before I make any progress, but at least I’m starting to define a goal.  Over the next several blog posts, I’m going to spend some time thinking through how this should work.

Desktop App or Web App?

The big question is whether to develop Treefrog as a standalone desktop application or as a web application.  There are major benefits to developing it as a web application, beyond web applications being hip now that everyone seems to be moving toward the cloud computing/software-as-a-service ideal: maintained control over upgrades and bug fixes, instantly visible to everyone on the domain; no installation or platform-specific glitches (other than browser JavaScript issues); the easy ability to roll out the application to a variety of non-traditional platforms such as mobile devices (I could see an awesome iPhone app here); the ability to tap into social networking to enhance the collection and organization of data; and, perhaps most importantly, I already know how to develop web applications.  I’ve been doing that for a while.

We’re on the verge of ubiquitous Internet access.  Many phone carriers are shipping with unlimited data Internet plans these days, and I can foresee a day in the not-too-distant future where all laptops have carrier-independent cellular Internet connectivity built right in.  Unlike the days where you had to worry about offline access, we’ll be at a point where having a computer and having Internet access are synonyms.  Mozilla, like Google, is quick to promote the web as a platform for all things.  A Mozilla affiliate recently said (paraphrasing) that Mozilla isn’t really going to focus on XULRunner as a desktop app development platform and that it’s a much better idea to focus on promoting a healthy, open Internet with web applications. 

But on the desktop side, there are benefits as well.  First and foremost is that there really isn’t a good, quality genealogy program that works cross-platform.  Mac users have Reunion, Linux users have GRAMPS, and Windows users have Family Tree Maker, but there’s no good program intended to make a researcher’s genealogy portable across platforms.  It would be much easier to deal with privacy issues on the desktop rather than the web.  There are opportunities to take advantage of native platform look and feel, extensions to further manipulate the data, and the ability to work with large media files such as high-resolution photo and source document scans.  Let’s face it, many genealogists are older in age and may not be keen on the idea of using some Internet app to store all of their research.  Finally, a personal reason to do a desktop app is for the sheer challenge of it.  I’ve never done a desktop application and have always been interested in this arcane world.

Besides, Treefrog is such a trendy name for a Mozilla-based application!  It fits right in with Firefox, Thunderbird, Songbird, etc.  But, if the desktop truly does become obsolete in a few years as some predict I suppose that another benefit would be that, by choosing to develop on XULRunner, my application would be largely developed in the language of the web and should I decide to take it to the web or tie into the Mozilla Weave API’s in the future it wouldn’t be as difficult.

Launchpad Page: