Saturday, October 9, 2010

AJAX Push in iOS, Safari, and Chrome with Server-Sent Events

One of the many new APIs in HTML5 is Server-Sent Events. Server-Sent Events are a lot like long-polling. They work like this: the client establishes a connection to the server, the server sends data to the client in pieces, and if the connection is severed the client re-establishes the connection and the server continues sending event data. Both long-polling and Server-Sent Events are one-way, server-to-client messaging, so if you need to send data TO the server, you will need to fall back on an XMLHttpRequest, and that message will be sent on a separate connection.

The main difference between Server-Sent Events and long-polling is that Server-Sent Events are handled directly by the browser. All the user has to do is listen for the messages from the server; there is no need to worry about re-establishing the connection, because the browser handles that part of the work. Server-Sent Events are really simple. The client-side JavaScript looks like this:

var source = new EventSource("/path/to/my/event-stream-handler.php");
// or whatever CGI you are using

source.onmessage = function (event) {
    // do something with event.data; it is a string
};


The EventSource class is where the work gets performed on the client side. The server-sent event stream syntax is really simple too: all you have to do is respond to an event request with the "Content-Type" header set to "text/event-stream", and start sending event streams whenever you are ready. A single event stream is written:

data: My event data\n\n

That is "data: ", followed by the string data you are sending as your event message, followed by two newline characters (\n\n). Optionally, an event stream can span multiple lines if written like so:

data: The first line\n
data: The second line\n\n

That makes it easy to send long JSON messages, or any other long message, without breaking the syntax. You will notice that there is only one newline after the first line of the message; this lets the client know that the message is longer and will be continued on the next line, which again begins with "data: ". Your server can also terminate lines with carriage-return/newline combos, or with carriage returns alone.
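Since the framing is just line-oriented string handling, it can be sketched in a few lines of JavaScript (the function names here are my own invention, not part of any API):

```javascript
// Build a server-sent event frame: prefix every line of the message with
// "data: " and terminate the frame with a blank line.
function encodeEventFrame(message) {
    return message.split("\n").map(function (line) {
        return "data: " + line;
    }).join("\n") + "\n\n";
}

// Undo the framing the way the browser does before handing you event.data:
// strip each "data:" prefix and rejoin the lines with single newlines.
function decodeEventFrame(frame) {
    return frame.replace(/\n+$/, "").split("\n").map(function (line) {
        return line.replace(/^data: ?/, "");
    }).join("\n");
}
```

So a multi-line JSON string round-trips cleanly: `decodeEventFrame(encodeEventFrame(str))` gives back `str`.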

"Great, that seems really simple, but what browsers support this?"


Good news! Chrome, Safari, and YES! Mobile Safari on iOS 4 already support Server-Sent Events, so you can start using this today. There isn't a lot of documentation for it out there, though. Of course, Chrome and Safari both support the WebSocket API, so you may choose that over this for the larger-screen devices. But being able to perform AJAX Push on an iOS device without using a hacked long-polling trick? That is AWESOME!

Example


Client-side handling is so simple: just set the "onmessage" attribute to your callback handler, or use the "addEventListener" method like so:

source.addEventListener("message", function(event) {
    //append the data to the body if you like
    document.body.innerHTML += event.data + "<br>";

}, false);


The server side part is simple too. Here is an example in PHP:

if ($_SERVER['HTTP_ACCEPT'] === 'text/event-stream') {
    //send the Content-Type header
    header('Content-Type: text/event-stream');
    //it's recommended to prevent caching of event data
    header('Cache-Control: no-cache');
    //send the first event stream immediately
    echo "data: This is the first event\n\n";
    //flush the output
    flush();

    $i = 5;
    //create a loop to output more event streams
    while (--$i) {
        //pause for 1 second
        sleep(1);
        //emit an event stream
        $time = date('r');
        echo "data: The server time is: {$time}\n\n";
        //flush the output again
        flush();
    }
}

That example responds with 5 separate event streams, each one second apart, to a single client request, and then terminates. The browser will then re-establish the connection after each request terminates. It is important to call flush() after each event stream is output in order to force the server to send the data in its buffer, but this may not work on Apache if gzip or deflate encoding is enabled, or on any other server that buffers output data. If your server doesn't flush() the data, the browser will end up receiving all the events at once, after the script terminates, like in this demo (yeah, I know, I need a different host).

To cancel an EventSource from the server side, respond with a Content-Type header other than "text/event-stream", or with an HTTP status code like 404 Not Found. That should make the browser stop trying to re-establish the connection.

A great server for handling these types of connections is node.js. With node.js you can handle the incoming event stream requests in a single process, allowing you to pipe event streams to multiple clients without using some external messaging system to communicate between threads or processes. The client can also maintain the connection to the server as long as it likes, because node.js handles a lot of simultaneous connections at once (I have heard up to 20,000, and it's fast :-)). You can also use Ruby's EventMachine or Python's Twisted to accomplish similar results.

Opera


Opera also supports a different version of Server-Sent Events, through a DOM element "event-source" and the "application/x-dom-event-stream" MIME type instead of "text/event-stream". Opera's implementation appears to be based on an older version of the spec, because the event stream syntax is slightly different as well. The event stream syntax in Opera is:

Event: event-name\n
data: event-data\n\n

More information on Opera's implementation can be found at http://labs.opera.com/news/2006/09/01/.

Check out the source from the demo above; it includes a node.js demo example as well.

My Conclusions

Server-Sent Events are ready for use in iOS web apps, and they are easy to implement using your existing server resources, so why not use them? Sure, a server built for asynchronous operations on many concurrent connections can make the whole process a lot smoother, but it isn't required. I find this subject intriguing, so hopefully I'll follow this up with more experiments later.

Monday, September 27, 2010

Promote JS!

If you care about being able to find good documentation for JavaScript, you should check out Mozilla's Promote JS campaign.

As Chris Heilmann points out in his article highlighting why being able to find good documentation for JavaScript is so important, W3Schools usually makes it to the top of the search results. Which is great for W3Schools, and they do provide a decent reference for the already initiated. But for those of us looking for a slightly better outline of what an object does or how to use it, real-world examples from actual users are a delightful resource, and a user-contributed source like the Mozilla Developer Center (MDC) is much better at providing them. That is where we need to concentrate our efforts: supporting the foundation that helps us. So, visit Promote JS, and help build a better community for JavaScript documentation.

JavaScript JS Documentation: JS Function arguments, JavaScript Function arguments, JS Function .arguments, JavaScript Function .arguments

Friday, September 10, 2010

Sony should be leveraging their hardware to deliver their media

Since I have been ranting recently, I thought I would keep up the trend by sharing my thoughts on what I think is the biggest failure to capitalize on existing market share in recent history: Sony.

Sony is into everything, and I mean everything. They make a list of consumer electronic devices longer than my arm: TVs, video disc players (Blu-ray and DVD), music players (does anyone else remember when ALL portable music players were called "Walkmans", way before the iPod?), phones, video game systems, computers, e-book devices, cameras both video and still; the list is exhausting. Sony also produces movies, television programs, and music. I probably don't even know half the different content production ventures they have their fingers in. So, why are they not leveraging the hardware to distribute their content?

Okay, I know they had the now-defunct Sony Connect for a while, and it never took off, but let's face it: Sony killed Connect out of fear that it would eat into CD sales. Look how that has worked out for them. Now they have BD-Live, but again, it's just another example of a venture set up to fail by its overlording siblings. Sad, really.

If Sony were really interested, they could create a system like Apple's in about a year. If they created an iTunes competitor for centrally distributing all of their media content (music, movies, television, etc.), they would start off with a HUGE selection of content. I don't know how the other music and movie studios would respond to such a move, but they are old sticks-in-the-mud anyway. Next, they would create a consistent UI for accessing this content on each of their devices, not just a few here and there; they would have a way to download songs to every phone, mp3 player, or stereo receiver they make without using a computer. Then they would add an interface for accessing movies and television from every TV, disc player, or computer that they make. The consumer would use the same account for downloading their content on any of their Sony-made devices. Sony would control the production, distribution, and consumption of its media empire through its electronics empire.

That scares me a little as a consumer, but I have so little faith in their ability to actually manifest such a media coup that I can honestly say I would like to see them at least try, in a meaningful way, without handicapping the system from the start. By the time they actually realize that their old way of selling content is disappearing too fast for them to replace the revenue streams with new ones, they will probably be so far behind companies like Apple and Amazon that they will have to subject themselves to the terms set by the retailers, instead of setting the terms themselves like they have done for so long.

Wither, big media; you deserve to die a slow, agonizing death.

Wednesday, September 8, 2010

Big ISPs can offer services that protect net neutrality and the bottom line

I read an article a while back, before Google and Verizon actually announced their pseudo net neutrality plan, about what Google and Verizon might be cooking up that didn't violate net neutrality "technically," but still allowed Google to reach users faster. In the article, the author theorizes that Google may be trying to reach a deal to get its self-contained data centers closer to Verizon customers by placing the shipping containers that Google uses at the Verizon data centers, thus creating a shorter distance between Google's servers and the search user. And because it doesn't "technically" violate the pure net neutrality standard of each packet being treated equally, it's a WIN/WIN for all parties involved: Google wins by lowering data transfer costs and speeding search results to the user, Verizon wins by collecting additional fees for providing the power and connection to its network, and the Verizon customers win because they get their search results and YouTube videos faster without paying more for the service.

I personally think this is an awesome idea, and I can hardly believe that the big ISPs haven't thought of it before. I think that the big ISPs, or any ISP for that matter, should start setting up Amazon CloudFront-style systems straight away. Can you imagine how much better watching a Netflix streaming video would be if there were only a few hops in the network between you and the server, instead of 18? Comcast, Verizon, Time Warner, and all the other big players should be doing this as fast as they can. They already have to maintain huge data centers with network switching hubs; why wouldn't they want to make some extra money by leasing space on their hardware to content providers?

These big ISPs should take it even further: they could provide a regionally based DNS service for redirecting domains to the closest data center that serves files for that domain. And further still, they could offer a regionally based, Amazon EC2-esque system of cloud computing services at these regional data centers. I mean, why haven't they jumped on the cloud computing bandwagon? They are in the best position possible to leverage existing hardware and customer connectivity into the most efficient possible connection with potential consumers of ANY given internet product.

While these guys argue over whether net neutrality will stifle or enhance innovation in the world of internet connections, they are missing what could potentially be the next big money maker for their industry, and a product line that benefits everyone in line, from the provider, to the business consumer, on down to us lowly individual customers. Maybe some big company like Amazon will catch on and make some sort of deal with the big ISPs that allows them to fill the gap where the ISPs are failing.

Monday, September 6, 2010

CSS Transform Functions Need More Work

I wonder how many people writing CSS look at the transforms spec and wonder why it doesn't have long-hand property names like "transform-scale" or "transform-rotate". It really reminds me of the old Microsoft specs for their Visual Filters. Is Apple really the new Microsoft? When did the W3C start changing the property-arguments convention?

Don't get me wrong, I think the short-hand version is great and easy to use, but can we developers/designers get a little bit more control, so that we can override certain properties without having to redefine the entire transform? It's like having to redeclare the border-bottom-width in order to change the border-left-style.
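To make the complaint concrete, here is the kind of repetition I mean (a hypothetical sketch using WebKit's prefixed shorthand; the class names are my own):

```css
/* to change only the scale on hover, the whole transform must be restated */
.card {
  -webkit-transform: rotate(5deg) scale(1);
}
.card:hover {
  -webkit-transform: rotate(5deg) scale(1.2); /* rotate(5deg) repeated */
}
```

A long-hand "transform-scale" property would let the second rule override just the scale.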

Furthermore, why can't it support multiple origins for separate functions? Why can't I declare "transform-rotate-origin: 0 0" and "transform-scale-origin: bottom right"? Is that too much to ask? I realize that, at least in the WebKit version, all the functions are combined into a single matrix, which probably creates quite the math headache for accomplishing multiple origin points on a transformation. But I am pretty confident it could still be accomplished by not combining them all into a single matrix, and instead applying them sequentially. Admittedly that may have some unexpected results on the transformation, so novices might want to avoid such an operation.

This might all just be my own little pet peeve, but I feel pretty much the same way about the W3C Gradients spec too. I don't have any really good suggestions on how to fix this one, but it seems overly complicated to make changes to the gradient being used. Maybe they could try something like "background-image: linear-gradient()", and then the designer/developer could define the properties of the gradient as CSS properties like "gradient-start: 0% red; gradient-end: 100% blue;". It's not a perfect solution, I know, but it would make it much easier to create subsequent gradients with small changes that don't require a redefinition of the entire gradient. Why would I want to do something like that, you might ask? Maybe I have a list of items and I want to use a different color gradient on some of them; instead of redefining the entire gradient I would be able to update only a portion of it.

Being a big fan of telling other people "It's open source, contribute!", I have looked around the Gecko and WebKit source code for how I could create a patch and submit it for adoption in the respective rendering engines, hoping to get my contribution accepted by all. But I must admit to being quite lost in a code base as large as Firefox's, and thoroughly confused by how to add it into WebKit as well. So if anyone wants to help me figure that out, or if you think it's a great idea and decide to take up my cause, I would greatly appreciate your generosity.

Tuesday, August 24, 2010

How to jQExtensions

UPDATE: jqextensions and the jQTouch photo gallery have been discontinued.

Recently I have received a few questions about how to perform certain tasks using jQExtensions, and I thought it would be a good idea to highlight a few of them here in hopes that it will help more people out.

Setup

For the purposes of these examples, #myphoto will reference a jQTouch Photo Gallery created by the following call:

jQT.generateGallery("myphoto", [/*images*/], {/*options*/});

and #myscroll will reference a div with a vertical-scroll class like:

<div id="myscroll" class="vertical-scroll">

Hopefully that makes sense, let me know if it doesn't.

jQTouch Photo Gallery


Changing options after the gallery has been created


It's pretty simple really:

var options = $("#myphoto").data("jqt-photo-options");
options.slideDelay = 10000;

The options object will now be a reference to the options in use by the script, so any changes that you make to this object take effect immediately in the execution of the gallery handlers.


jQT Scrolling


How to reset the scrolled element


$("#myscroll > div").trigger("reset");
That's it! Now the scrolled div will be returned to the initial position.


Add dynamic page with scrolling

If your application requires you to create "pages" dynamically in jQTouch, and you would like to still use the scroll extension on that dynamic page, just notify the extension that you have inserted a page like so:
$("body").trigger("pageInserted", {page: $("#my-new-page")});

More Questions

I will make every attempt to keep updating this list. If you have questions like these, feel free to leave a comment, and I will try to help if I can. Thanks, and I hope you find it useful.

Tuesday, July 13, 2010

Prefix or PostHack, or Both

The entire debate about "Prefix or PostHack" is going down the wrong path, with PPK going so far as to suggest that vendor prefixes should be completely eliminated or simplified into a single extension (though he did recant a little), and Eric Meyer rebutting that CSS before vendor prefixes was effectively a living nightmare. The REAL problem is not whether vendor prefixes should be used, but HOW they should be used to enable the developer/designer to choose which implementation is best for their purposes, using prefixes as a posthack.

The Problem with Vendor Prefixes

The problem with vendor prefixes lies in the fact that each vendor recognizes only their own prefixed property until the module has reached Candidate Recommendation, and that CSS 2.1 prevents the vendors from introducing unprefixed properties that have not met these Candidate Recommendation requirements. This is the wrong approach, and it undermines the strength of standardization by removing control from the developers and designers who are using the standard in day-to-day practice. The entire section should be rewritten to empower the developer/designer with more control over how they want the parser to treat their properties, making vendor prefixes the first-class citizens and unprefixed properties the second-class citizens of CSS.

How Vendor Prefixes should work

Using Eric's example of text-curl: if vendor X decides to implement this new property, the unprefixed property name, text-curl, should be recognized by vendor X's CSS parser, but the parser should give precedence to the prefixed property name -x-text-curl, a sort of property-name specificity. This allows other vendors to implement the exact same property name, and vendor X's value format for it, in their parsers. If another vendor decides they don't like vendor X's implementation of text-curl and chooses to change the format of the value, for instance, then the prefixed property -x-text-curl would still take precedence in vendor X's implementation. This gives the developer/designer the ability to decide which implementation he/she prefers, or thinks will make it to Candidate Recommendation, use that value format for the unprefixed property name text-curl, and override the unprefixed property name with the vendor-prefixed version of the competing implementation.
h1{ text-curl: small or large; -x-text-curl: large; }
In this case the other vendor's value is used as the default, and vendor X's value is used only by vendor X. This effectively gives the developer/designer control over how the property is used by each vendor, while also allowing for the fact that each vendor may choose to use the same format, in which case the developer can choose not to include the prefixed property name.
h1{ text-curl: small or large; }

Give the Developers/Designers more power to control layout in specific browsers

Conversely, if the developer/designer is not willing to chance it one way or the other on which implementation might succeed in making it to Candidate Recommendation, he/she would have the option of including both vendors' prefixed property names as well as the unprefixed property name.
h1{ text-curl: small or large; -x-text-curl: large; -other-text-curl: small or large; }

Give Developers/Designers ALL the power to control layout in specific browsers

Moreover, I would go a step further and give all properties the ability to be overridden using a vendor prefix, even existing standard property names. This would allow developers/designers to target a specific parser with different values for a given property name, avoiding the laborious system of hacks designed for targeting specific browsers with different interpretations and implementations.
h1{ height: 30px; -moz-height: 40px; -webkit-height: 50px;}
instead of
h1{ height: 30px; }
@-moz-document url-prefix() { h1{ height: 40px; } }
@media all and (-webkit-min-device-pixel-ratio:1) { h1{ height: 50px; } }

And Let Developers/Designers choose when to exercise that power

In my opinion, the vendors, and the W3C, would be well served to create a way for developers/designers to target not just a specific browser, but also a specific version of that browser, using the @media syntax in addition to the new media query support.
@media screen and user-agent(Gecko-1.9.3, WebKit-533) { h1{ height: 45px; } }

Don't standardize speed, standardize quality

Finally, while I do agree with Eric's statement that the W3C should set a rule promoting a module to Candidate Recommendation once two competing vendors have implemented it in a similar manner, I disagree with the idea that this will improve the quality of the standards being developed. It will definitely improve the speed of standardization, but I am reluctant to place my faith in the ability of multiple vendors to agree on an implementation simply for the sake of pushing it through the red tape, when the agreed-upon implementation may or may not be what is best for the standardization process or the community as a whole. The recommendation I have presented here would make such a rule unnecessary, since the speed of standardization would no longer be a drastic concern and each developer/designer would have the ability to target specific browsers with specific rules.

In conclusion, empowering developers and designers with the ability to target a specific browser when they so choose would go great lengths toward satisfying the needs and desires of the people who use these technologies and standards every day, and would lessen the need to hasten the standardization process. Let us decide what is the best use case for our designs.

Thursday, July 8, 2010

Support for jQTouch Photo Gallery and Extensions

UPDATE: jqextensions and the jQTouch photo gallery have been discontinued.

Tuesday, June 15, 2010

jQTouch Photo Gallery

UPDATE: jqextensions and the jQTouch photo gallery have been discontinued.


I just uploaded a new Photo Gallery extension for jQTouch to the jqextensions project, along with a completely revamped version of the jQTouch inertia scrolling/sliding/fixed toolbar extension. It was quite an experience building the photo gallery extension, and the updates to the scrolling extension were pretty exciting for me.

New features in the scrolling extension include: an optional scrollbar and dynamically assigned dimensions for better support across more devices.

I will post a more complete account of each of these useful extensions in the coming weeks. In the mean time, try them out in this demo.

Saturday, February 20, 2010

Why you should use an autoload function in PHP

The loading of classes is something that managed languages like Java and C# don't need to worry about; class loaders are built into the compiler. But C/C++ programmers have always had to deal with the issue of accidentally including the same file twice in a build. They found an easy way around that by wrapping includes in an
#ifndef CONSTANT
#include "myfile.h"
#endif
and placing
#ifndef CONSTANT
#define CONSTANT
#endif
into "myfile.h". This is a good system for a compiled language that only needs to evaluate these expressions once, at build time.

PHP doesn't use this method because it has the handy little include functions, include_once and require_once, that prevent you from loading the same file more than once. But unlike a compiled language, PHP re-evaluates these expressions over and over, each time a file containing one or more of them is loaded into the runtime. That is where the Standard PHP Library (SPL), introduced in PHP 5, and the wonderful little __autoload function come in to enhance the speed and uniformity of your PHP code.

__autoload is a magic function, which you define, that PHP calls to let you know when a class that needs to be loaded hasn't been loaded yet.

If you define the __autoload function like so,
function __autoload ($classname)
{
    require('/path/to/my/classes/'.$classname.'.php');
}
you no longer need to add
require_once('/path/to/my/classes/MyClass.php');
into your files, because the first time that PHP encounters
$mine = new MyClass();
or
MyClass::staticMethodCall();
it will automatically call the __autoload function that you defined earlier.
__autoload('MyClass');
PHP doesn't do this EVERY time it encounters these calls, just the first time. Thus, you no longer need to add the require_once('/path/to/my/classes/MyClass.php'); to any files at all.

Why is __autoload a good thing?

The primary reason is that it improves the performance of your scripts by preventing PHP from checking whether the file has already been loaded, like it does every time you call require_once or include_once. Moreover, you no longer have to load a class file just because you MIGHT need it during the execution of your script, because PHP will let you know if it is needed, when it is needed.

Of course, if you are sure that a class is not yet loaded, and that you will positively need that class during the execution of your script, you should by all means use the require() function to include your file. But from personal experience this is something that rarely happens among files that contain classes. For instance if you have a class that extends another class, you know for sure that the other class will be needed, but do you know for sure that it has not already been loaded? Usually not, because typically, you would be extending that parent class with at least one other child class. But I guess this is not always the case, so you should do what you think is best.

Advanced Usage

__autoload also makes it possible to change the include directory for a class based on some identifier in the class name:
function __autoload ($classname)
{
    //strstr() takes the haystack first, then the needle
    if (strstr($classname, 'MyNamespace'))
    {
         require('/some/other/path/to/my/classes/'.$classname.'.php');
    }
    else
    {
         require('/path/to/my/classes/'.$classname.'.php');
    }
}
or translate a class name into a file path location

function __autoload ($classname)
{
    //you could also replace '\\', if you are using namespacing in PHP 5.3 or greater
    require('/path/to/my/classes/'.str_replace('_', '/', $classname).'.php');
}
There are even more techniques that can be used, like changing file extensions and so on.

What if I need more than one __autoload function in my script?

One of the greatest things about SPL is that it provides a way to define more than one __autoload function, using spl_autoload_register. If you already have an __autoload function, you will need to register that function before registering any additional functions, though.
spl_autoload_register('__autoload');
spl_autoload_register('my_other__autoload');
Of course, if you do this, you will need to use the include function in your autoloaders instead of the require function, or check whether a file exists in the expected path; otherwise the next function will never get called, because the runtime will encounter a fatal error. Additionally, spl_autoload_register accepts any 'callable' type variable, meaning that you can use a method of a class as an autoload function as well.
//for a static method
spl_autoload_register(array('MyAlreadyLoadedClass', 'autoloader'));
or
//for an instantiated object method
spl_autoload_register(array($object, 'someAutoLoader'));

So, what if I just need a simple autoloader?

There is an awesome feature in SPL that allows you to tell PHP where to look for class files by default. Every time the PHP runtime encounters a class that is not yet loaded, it calls the spl_autoload function, which in turn looks in the include_path for files with the same name as the class that is supposed to be loaded. It uses the file extensions defined by the spl_autoload_extensions function, with .inc and .php set by default.

So, how do I use this to create a simple loader? If your classes are in files with the same names as the class names, and they are all in the same folder, simply add that folder to the include_path:
set_include_path(get_include_path() . PATH_SEPARATOR . '/path/to/my/classes/');
Now every time PHP encounters a class that is not yet loaded, it calls spl_autoload, which looks through each of the include_path folders for a file named MyClass.inc or MyClass.php. This method is slightly faster than the __autoload function, because it is native to the PHP runtime. And if you need to add a file extension that you want spl_autoload to look for, just call:
spl_autoload_extensions(spl_autoload_extensions() . ',.class.php');
And spl_autoload will look for files that also end with .class.php.

SPL is chock full of goodies, but the autoload functionality is, in my opinion, one of its most useful additions. You are likely to find great performance increases by using these methods.

Sam Shull: PHP Programming Innovation Award of 2009

The PHP community at phpclasses.org has honored me with the PHP Programming Innovation Award of 2009. I am humbled that the international PHP community has shown such great appreciation for my endeavors and community contributions. I think that the hard work of Manuel Lemos at phpclasses has a very positive effect on the PHP community in providing a centralized place to share experiences and knowledge. If you haven't visited the site recently, I think that you will be pleasantly surprised by the robust diversity and ingenuity of many of the contributors.

Wednesday, February 17, 2010

Cross Browser ECMAScript for XML (E4X)

I have written a cross browser compatible implementation of E4X.

Well, when I say cross browser, I mean that it is capable of handling the methods defined by the specification, not the syntactical parts. Many of the key features that make E4X so useful are hard, if not impossible, to reproduce in other JavaScript engines. But what can be reproduced are the simplicity and namespace handling features that make it possible to create a more fluid handling of XML documents than the DOM specification provides. Additionally, all the functionality has been tested in Internet Explorer, so it is now possible to handle namespaces in the browser that makes them so terribly difficult to deal with.

http://code.google.com/p/xbe4x/

Thursday, February 11, 2010

Thoughts on XHP

I am really happy to hear that Facebook has now officially released XHP, an E4X type of implementation for PHP. My initial thoughts are that PHP was built as a templating engine, and so this innovative approach to templating will only help solidify PHP as the best templating engine for the web. But after reading Rasmus' take on the performance hit that you might sustain from using XHP, I'm not so sure.

In fairness, I have not actually tried this extension out yet (it is an extension, not like HipHop). The amount of data that you would run through the XHP tags would presumably be fairly small. I am wondering if it is possible to, say, use the short tags to write something like:

<?=<b style={$style}>{$container}</b>?>

It might go against the whole "simpler to read" idea, but if ease of use is the goal, then I think it would most certainly accomplish that. On the upside, this would also keep templates cacheable, while enabling a fairly simple syntax to provide designers with. All in all, I think I will just have to test it for myself.

Wednesday, February 10, 2010

Extensions to jQTouch

A few months back I discovered jQTouch by Dave Kaneda, which is a pretty amazing framework for building mobile WebKit based web apps. It's still in beta, but the animations are very robust, and it is easily extended. And so I set out to write a few extensions that would make jQTouch act more like a native iPhone app.

My initial goal was to write an extension that would enable the same type of horizontal scrolling of images that you see in Apple's App Store. But all I could find on the internet was a vertical scrolling div example by Matteo Spinelli (he has since updated the class to include multi-directional scrolling, sort of a drag feature). This appeared to emulate the 'flick-to-scroll' that makes iPhone apps so useful for mobile computing (IMHO), and had the added benefit of allowing the simulation of a fixed position toolbar. I integrated Matteo's scrolling div into an extension of jQTouch so that the functionality would appear seamless, and allow a designer/developer to simply add a few classes to their creation in order to implement the functionality.

But vertical scrolling wasn't what I was looking for, so I started fiddling with Matteo's script to see how it worked and how I could alter it to enable horizontal scrolling/sliding. After figuring out that Matteo was using WebKit's -webkit-transform CSS property and the translate3d function to animate the changes expressed by a touchmove event, I figured, 'Why can't I just change that to use the translateX function instead?' And after hours of tinkering, I finally got that working too, after adjusting the acceleration part of the scrolling div's touchend event.
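The heart of that change can be sketched in a couple of lines (the helper name is my own; only the translateX() syntax comes from WebKit's CSS):

```javascript
// Given where a horizontal drag started and where the finger is now,
// build the -webkit-transform value to apply to the scrolling element.
function dragTransform(startX, currentX) {
    return 'translateX(' + (currentX - startX) + 'px)';
}

// in a touchmove handler, one would then do something like:
// el.style.webkitTransform = dragTransform(startX, event.touches[0].pageX);
```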

While I was working on this, however, I realized that Apple didn't just scroll right and left; they slid. So I created a different version of the same script that implemented a 'snapTo' method to find the next slide frame in the horizontal or vertical scrolling container. I also added a scaling gesture extension, in hopes of one day implementing a more robust photo gallery like the native iPhone photo gallery.

One day I will find the time to improve these extensions further, but for now you can find the source code and a few examples at the Google Code project that I set up for them; I call it jqextensions.