Saturday, December 19, 2009

qooxdoo

  • About
    qooxdoo (pronounced [’ku:ksdu:]) is a comprehensive and innovative framework for creating desktop-style web applications, often called rich internet applications (RIAs). Leveraging object-oriented JavaScript allows developers to build impressive cross-browser applications. No HTML, CSS or DOM knowledge is needed. qooxdoo includes a platform-independent development tool chain, a state-of-the-art GUI toolkit and an advanced client-server communication layer. It is Open Source under an LGPL/EPL dual license.
  • Framework
    qooxdoo is entirely class-based and tries to leverage the features of object-oriented JavaScript. It is fully based on namespaces and does not extend native JavaScript types, to allow for easy integration with other libraries and existing user code. Most modern browsers are supported (e.g. Firefox, Internet Explorer, Opera, WebKit/Safari) and it is free of memory leaks. It comes with a comprehensive API reference that is auto-generated from Javadoc-like comments and from the syntax tree representing the code. The fast and complete JavaScript parser not only allows doc generation, but is an integral part of the automatic build process that makes optimizing, compressing, linking and deployment of custom applications very user-friendly. Internationalization and localization of applications for various countries and languages is a core feature and easy to use. A minimal class definition is sketched at the end of this list.
  • GUI Toolkit
    Despite being a pure JavaScript framework, qooxdoo is quite on par with GUI toolkits like Qt or SWT when it comes to advanced yet easy to implement user interfaces. It offers a full-blown set of widgets that are hardly distinguishable from elements of native desktop applications. Full built-in support for keyboard navigation, focus and tab handling and drag & drop is provided. Dimensions can be specified as static, auto-sizing, stretching, percentage, weighted flex or min/max or even as combinations of those. All widgets are based on powerful and flexible layout managers which are a key to many of the advanced layout capabilities. Interface description is done programmatically in JavaScript for maximum performance.

    No HTML has to be used and augmented to define the interface. The qooxdoo developer does not even have to know CSS to style the interface. Clean and easy-to-configure themes for appearance, colors, borders, fonts and icons allow for a full-fledged styling that even supports runtime switching.

  • Ajax
    While being a client-side and server-agnostic solution, the qooxdoo project does include complete implementations of RPC servers (currently Java, PHP, Perl, Python) to demonstrate some of its advanced client-server communication. An abstract transport layer supports queues, timeouts and implementations via XMLHttpRequest, Iframes and Scripts. Like the rest of qooxdoo it fully supports event-based programming, which greatly simplifies asynchronous communication.
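To give a flavor of the class-based style, here is a minimal sketch of a qooxdoo class definition using the qx.Class.define API; the class name and members are made up for illustration:

qx.Class.define("myapp.Greeter", {
  extend : qx.core.Object,
  members : {
    // an ordinary instance method
    greet : function(name) {
      return "Hello, " + name;
    }
  }
});

var greeter = new myapp.Greeter();
greeter.greet("world"); // "Hello, world"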
http://qooxdoo.org/

Sunday, December 13, 2009

Dojo 1.4 is out with Significant Improvements to Performance, Stability, and Features

Key updates mentioned are:

  • IO Pipeline topics
  • dojo.cache
  • dojo.contentHandlers
  • dojo.hash with native HTML5 onhashchange event support where available
  • Traversal and manipulation for NodeLists (the return value for dojo.query)
  • dojo.ready (easier to type than dojo.addOnLoad; sketched with dojo.hash after this list)
  • Hundreds of refinements to the Dijit API and collection of Dijits, and a few new widgets in DojoX
  • DataChart widget and other improvements to charting
  • dojox.drawing lands!
  • Editor improvements and new plug-ins in both Dijit and DojoX
  • Grid is faster, and the EnhancedGrid lands!
  • ForestStoreModel for the TreeGrid
  • GFX improvements
  • dojox.jq, a very experimental module aimed at matching the jQuery API as closely as possible, but using Dojo underneath
  • Dojo build system optionally supports the Google Closure Tools compiler
  • Significant speed improvements, especially in IE
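As a taste of two of the smaller items, here is a hedged sketch of dojo.ready and dojo.hash working together (the hash value is made up; dojo.hash publishes the /dojo/hashchange topic when the fragment changes):

dojo.ready(function() {
  // listen for back/forward-friendly hash changes
  // (uses the native HTML5 onhashchange event where available)
  dojo.subscribe("/dojo/hashchange", function(hash) {
    console.log("hash is now: " + hash);
  });
  dojo.hash("section1"); // sets location.hash and fires the topic
});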
http://docs.dojocampus.org/releasenotes/1.4

Tuesday, December 8, 2009

The New Clearfix Method

The clearfix hack, or “easy-clearing” hack, is a useful method of clearing floats.
The original clearfix hack works great, but the browsers that it targets are either obsolete or well on their way. Specifically, Internet Explorer 5 for Mac is now history, so there is no reason to bother with it when using the clearfix method of clearing floats.
The original clearfix hack looks something like this:
.clearfix:after {
visibility: hidden;
display: block;
font-size: 0;
content: " ";
clear: both;
height: 0;
}
.clearfix { display: inline-table; }
/* Hides from IE-mac \*/
* html .clearfix { height: 1%; }
.clearfix { display: block; }

/* End hide from IE-mac */

Yes it’s ugly, but it works very well, enabling designers to clear floats without hiding overflow and setting a width or floating (nearly) everything to get the job done. The logic behind this hack goes something like this:

  • Target compliant browsers with the first declaration block (if all browsers were standards-compliant, this would be the only thing needed) and create a hidden clearing block after the content of the target element.
  • The second declaration applies an inline-table display property, exclusively for the benefit of IE/Mac.
  • At this point, we use the comment-backslash hack to hide the remainder of the rules from IE/Mac. This enables us to do the following:
  • Apply a 1% height only to IE6 to trigger hasLayout (which is required for the hack to work)
  • Re-apply display:block to everything except IE/Mac
  • The last line is a comment that serves to close the hack for IE/Mac

As you can see, that’s a lot of fuss over a browser that has been dead for at least the last three or four years. Nobody uses IE/Mac anymore, so it is time to drop it from the clearfix hack. The result is a much cleaner and more efficient slice of CSS:

/* new clearfix */
.clearfix:after {
visibility: hidden;
display: block;
font-size: 0;
content: " ";
clear: both;
height: 0;
}
* html .clearfix { zoom: 1; } /* IE6 */
*:first-child+html .clearfix { zoom: 1; } /* IE7 */

Stripping out that IE/Mac cruft cleans things up real nice. Notice that we have further improved the clearfix hack by adding support for IE7. Neither IE6 nor IE7 supports the :after pseudo-element used in the first declaration, so we need an alternate method of applying the clearfix. Fortunately, applying zoom:1 to either browser triggers IE's proprietary hasLayout mechanism, which works just fine to clear the float. For expediency's sake, we accomplish this with a couple of valid browser-specific selectors, but you should be advised that conditional comments are the recommended way to go.

Fortunately, IE8 supports the :after pseudo-element, so this new clearfix method will only get simpler as IE6 and, eventually, IE7 finally die off.

Bottom line: The new clearfix method applies clearing rules to standards-compliant browsers using the :after pseudo-element. For IE6 and IE7, the new clearfix method triggers hasLayout with some proprietary CSS. Thus, the new clearfix method effectively clears floats in all currently used browsers without using any hacks.

http://perishablepress.com/press/2009/12/06/new-clearfix-hack

Underscore.js

Underscore is a utility-belt library for JavaScript that provides a lot of the functional programming support that you would expect in Prototype.js (or Ruby), but without extending any of the built-in JavaScript objects. It's the tie to go along with jQuery's tux.

Underscore provides 60-odd functions that cover the usual functional suspects (map, select, invoke) as well as more specialized helpers: function binding, JavaScript templating, deep equality testing, and so on. It delegates to built-in functions, if present, so JavaScript 1.6 compliant browsers will use the native implementations of forEach, map, filter, every, some and indexOf.
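A few illustrative one-liners (the data is made up; results are shown in comments):

_.map([1, 2, 3], function(n) { return n * 2; });        // [2, 4, 6]
_.select([1, 2, 3, 4], function(n) { return n % 2; });  // [1, 3]
_.isEqual({a: [1, 2]}, {a: [1, 2]});                    // true (deep equality)
var hello = _.template("Hello <%= name %>");
hello({name: "tux"});                                   // "Hello tux"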

A complete Test & Benchmark Suite is included for your perusal.

The unabridged source code is available on GitHub.

http://documentcloud.github.com/underscore/

Sunday, December 6, 2009

NOSQL Patterns

Over the last couple of years, we have seen an emerging class of data storage mechanisms for storing data at large scale. These storage solutions differ quite significantly from the RDBMS model and are collectively known as NoSQL. Some of the key players include ...
  • GoogleBigTable, HBase, Hypertable
  • AmazonDynamo, Voldemort, Cassandra, Riak
  • Redis
  • CouchDB, MongoDB
These solutions have a number of characteristics in common:
  • Key value store
  • Run on large number of commodity machines
  • Data are partitioned and replicated among these machines
  • Relaxed data consistency requirements (the CAP theorem proves that you cannot have Consistency, Availability and Partition tolerance at the same time)
API model

The underlying data model can be considered as a large Hashtable (key/value store).

The basic form of API access is
  • get(key) -- Extract the value given a key
  • put(key, value) -- Create or Update the value given its key
  • delete(key) -- Remove the key and its associated value
A more advanced form of the API allows executing user-defined functions in the server environment:
  • execute(key, operation, parameters) -- Invoke an operation on the value (given its key), where the value is a special data structure (e.g. List, Set, Map, etc.)
  • mapreduce(keyList, mapFunc, reduceFunc) -- Invoke a map/reduce function across a key range.
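In JavaScript terms, the basic API is just a hashtable-like contract. A minimal in-memory sketch (the constructor name is illustrative):

function InMemoryStore() {
  this.data = {};                          // key -> value
}
InMemoryStore.prototype.get = function(key) {
  return this.data[key];                   // extract the value given a key
};
InMemoryStore.prototype.put = function(key, value) {
  this.data[key] = value;                  // create or update the value
};
InMemoryStore.prototype.del = function(key) {
  delete this.data[key];                   // remove the key and its value
};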

Machines layout

The underlying infrastructure is composed of a large number (hundreds or thousands) of cheap, commoditized, unreliable machines connected through a network. We call each machine a physical node (PN). Each PN has the same set of software configuration but may have varying hardware capacity in terms of CPU, memory and disk storage. Within each PN, a variable number of virtual nodes (VNs) run according to the available hardware capacity of the PN.


Data partitioning (Consistent Hashing)

Since the overall hashtable is distributed across many VNs, we need a way to map each key to the corresponding VN.

One way is to use
partition = key mod (total_VNs)

The disadvantage of this scheme is that when we alter the number of VNs, the ownership of existing keys changes dramatically, requiring a full data redistribution. Most large-scale stores therefore use a "consistent hashing" technique to minimize the amount of ownership change.


In the consistent hashing scheme, the key space is finite and lies on the circumference of a ring. The virtual node ids are allocated from the same key space. For any key, its owner node is defined as the first virtual node encountered when walking clockwise from that key. If the owner node crashes, all the keys it owns are adopted by its clockwise neighbor. Therefore, key redistribution happens only in the neighborhood of the crashed node; all other nodes retain the same set of keys.
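A minimal sketch of the clockwise lookup, assuming keys and virtual node ids have already been hashed into the same integer key space:

// nodes: array of {id: position on the ring}, sorted ascending by id
function ownerOf(keyHash, nodes) {
  // the first virtual node at or after the key, walking clockwise, owns it
  for (var i = 0; i < nodes.length; i++) {
    if (nodes[i].id >= keyHash) { return nodes[i]; }
  }
  return nodes[0]; // wrapped around the ring
}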


Data replication

To provide high reliability from individually unreliable resources, we need to replicate the data partitions.

Replication not only improves the overall reliability of data, it also helps performance by spreading the workload across multiple replicas.


While read-only requests can be dispatched to any replica, update requests are more challenging because we need to carefully coordinate the updates happening at these replicas.

Membership Changes

Notice that virtual nodes can join and leave the network at any time without impacting the operation of the ring.

When a new node joins the network
  1. The joining node announces its presence and its id to some well-known VNs (or just broadcasts it).
  2. All the neighbors (left and right side) adjust to the change of key ownership as well as the change of replica membership. This is typically done synchronously.
  3. The joining node starts to bulk-copy data from its neighbors asynchronously and in parallel.
  4. The membership change is asynchronously propagated to the other nodes.

Notice that the other nodes may not have their membership view updated yet, so they may still forward requests to the old nodes. But since these old nodes (the neighbors of the newly joined node) have been updated (in step 2), they will forward those requests to the newly joined node.

On the other hand, the newly joined node may still be in the process of downloading the data and not ready to serve yet. We use the vector clock (described below) to determine whether the newly joined node is ready to serve a request; if it is not, the client can contact another replica.

When an existing node leaves the network (e.g. crashes)
  1. The crashed node no longer responds to gossip messages, so its neighbors find out about it.
  2. The neighbors update the membership and copy data asynchronously.

We haven't talked about how the virtual nodes are mapped onto the physical nodes. Many schemes are possible, with the main goal that replicas of a virtual node should not sit on the same physical node. One simple scheme is to assign virtual nodes to physical nodes randomly, checking that a physical node doesn't contain replicas of the same key range.

Notice that machine crashes happen at the physical node level, and each physical node has many virtual nodes running on it. So when a single physical node crashes, the workload (of its multiple virtual nodes) is scattered across many physical machines. Therefore the increased workload due to a physical node crash is evenly balanced.


Client Consistency

Once we have multiple copies of the same data, we need to worry about how to synchronize them such that the client gets a consistent view of the data.

There are a number of client consistency models:
  1. Strict Consistency (one-copy serializability): This provides the semantics as if there were only one copy of the data. Any update is observed instantaneously.
  2. Read-your-write consistency: This allows the client to see his own updates immediately (even when switching servers between requests), but not necessarily the updates made by other clients.
  3. Session consistency: Provides read-your-write consistency only while the client issues requests within the same session scope (which is usually bound to the same server).
  4. Monotonic Read Consistency: This provides a time-monotonicity guarantee that the client will only see increasingly up-to-date versions of the data in future requests.
  5. Eventual Consistency: This provides the weakest form of guarantee. The client can see an inconsistent view while updates are in progress. This model works when concurrent access to the same data is very unlikely and the client needs to wait for some time if he wants to see his previous update.

Depending on which consistency model is to be provided, two mechanisms need to be arranged ...
  • How the client request is dispatched to a replica
  • How the replicas propagate and apply the updates
There are various models for how these two aspects can be handled, with different tradeoffs.

Master Slave (or Single Master) Model

Under this model, each data partition has a single master and multiple slaves (e.g. B is the master of key range AB and C, D are its slaves). All update requests have to go to the master, where the update is applied and then asynchronously propagated to the slaves. Notice that there is a time window for data loss if the master crashes before it propagates its update to any slave, so some systems wait synchronously for the update to be propagated to at least one slave.

Read requests can go to any replica if the client can tolerate some degree of data staleness; this is how the read workload is distributed among many replicas. If the client cannot tolerate staleness for certain data, those reads also need to go to the master.

Note that this model doesn't mean one particular physical node plays the role of master. The granularity of "mastership" is at the virtual node level: each physical node has some virtual nodes acting as masters of some partitions while its other virtual nodes act as slaves of other partitions. Therefore, the write workload is also distributed across different physical nodes, although this is due to partitioning rather than replication.

When a physical node crashes, the masters of certain partitions are lost. Usually, the most up-to-date slave is nominated to become the new master.

The master-slave model works very well in general when the application has a high read/write ratio. It also works very well when updates are spread evenly across the key range. So it is the predominant model of data replication.

There are two ways the master can propagate updates to the slaves: state transfer and operation transfer. In state transfer, the master passes its latest state to the slave, which then replaces its current state with the latest one. In operation transfer, the master propagates a sequence of operations to the slave, which then applies the operations to its local state.

The state transfer model is more robust against message loss, because as long as a later, more up-to-date message arrives, the replica is still able to advance to the latest state.

Even in state transfer mode, we don't want to send the full object when updating other replicas, because changes typically happen within a small portion of the object. It would be a waste of network bandwidth to send the unchanged portion, so we need a mechanism to detect and send just the delta (the portion that has changed). One common approach is to break the object into chunks and compute a hash tree over it. Replicas can then compare their hash trees to figure out which chunks of the object have changed and send only those over.
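A hedged sketch of the chunk-comparison idea, using a flat list of chunk hashes instead of a full hash tree (a real implementation would compare tree levels to skip whole unchanged subtrees):

// given both sides' chunk hashes, return the indexes of chunks to send over
function changedChunks(localHashes, remoteHashes) {
  var delta = [];
  for (var i = 0; i < localHashes.length; i++) {
    if (localHashes[i] !== remoteHashes[i]) { delta.push(i); }
  }
  return delta;
}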

In operation transfer mode, usually much less data needs to be sent over the network. However, it requires a reliable messaging mechanism with delivery-order guarantees.


Multi-Master (or No Master) Model

If there are hot spots in certain key ranges with intensive write requests, the master-slave model is unable to spread the workload evenly. The multi-master model allows updates to happen at any replica (calling it "no-master" is perhaps more accurate).

If any client can issue any update to any server, how do we synchronize the states such that we retain client consistency and every replica eventually gets to the same state? We describe a number of different approaches below ...

Quorum Based 2PC

To provide "strict consistency", we can use a traditional 2PC protocol to bring all replicas to the same state at every update. Lets say there is N replicas for a data. When the data is update, there is a "prepare" phase where the coordinator ask every replica to confirm whether each of them is ready to perform the update. Each of the replica will then write the data to a log file and when success, respond to the coordinator.

After gathering positive responses from all replicas, the coordinator initiates the second "commit" phase and asks every replica to commit; each replica then writes another log entry to confirm the update. Notice that there are some scalability issues here, as the coordinator needs to wait synchronously for quite a lot of back-and-forth network round trips and disk I/O to complete.

On the other hand, if any one of the replicas crashes, the update is unsuccessful. As there are more replicas, the chance of one of them failing increases. Therefore, replication hurts availability rather than helping it. This makes traditional 2PC an unpopular choice for high-throughput transactional systems.

A more efficient way is to use quorum-based 2PC (e.g. Paxos). In this model, the coordinator only needs to update W replicas (rather than all N replicas) synchronously. It still writes to all N replicas but waits for positive acknowledgments from only W of the N. This is much more efficient from a probabilistic standpoint.

However, since not all replicas are updated, we need to be careful when reading the data to make sure the read reaches at least one replica that was previously updated successfully. When reading the data, we read R replicas and return the one with the latest timestamp.

For "strict consistency", the important condition is to make sure the read set and the write set overlap. ie: W + R > N


As you can see, quorum-based 2PC can be considered a generalized 2PC protocol, with traditional 2PC as the special case W = N and R = 1. The general quorum-based model allows us to pick W and R according to our tradeoff between read and write workload.

If the user cannot afford to pick W and R large enough, i.e. W + R <= N, then the client is relaxing its consistency model to a weaker one.

If the client can tolerate a more relaxed consistency model, we don't need the 2PC or quorum-based protocols above. Here we describe a gossip model, in which updates are propagated asynchronously via gossip message exchanges and an anti-entropy protocol applies the updates so that every replica eventually gets to the latest state.

Vector Clock


A vector clock is a timestamp mechanism that lets us reason about causal relationships between updates. First of all, each replica keeps a vector clock. Let's say replica i has clock Vi, where Vi[i] is its own logical clock. Every replica follows certain rules to update its vector clock:
  • Whenever an internal operation happens at replica i, it advances its clock Vi[i]
  • Whenever replica i sends a message to replica j, it first advances its clock Vi[i] and attaches its vector clock Vi to the message
  • Whenever replica j receives a message from replica i, it first advances its clock Vj[j] and then merges its clock with the clock Vm attached to the message, i.e. Vj[k] = max(Vj[k], Vm[k])
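A compact sketch of these three rules (replica ids index a plain object; missing entries default to 0):

function VectorClock(id) {
  this.id = id;       // this replica's id
  this.clock = {};    // replica id -> logical time
}
// rule 1, and the first half of rule 2: advance our own entry
VectorClock.prototype.tick = function() {
  this.clock[this.id] = (this.clock[this.id] || 0) + 1;
};
// rule 3: advance our own entry, then take the pairwise max
VectorClock.prototype.receive = function(attachedClock) {
  this.tick();
  for (var k in attachedClock) {
    this.clock[k] = Math.max(this.clock[k] || 0, attachedClock[k]);
  }
};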

A partial order can be defined such that Vi > Vj iff for all k, Vi[k] >= Vj[k]. We can use this partial ordering to derive causal relationships between updates. The reasoning behind it is:
  • The effect of an internal operation will be seen immediately at the same node
  • After receiving a message, the receiving node knows the situation of the sending node at the time the message was sent. The situation includes not only what is happening at the sending node, but also what the sending node knows about all the other nodes.
  • In other words, Vi[i] reflects the time of the latest internal operation at node i, and Vi[k] = 6 means replica i knows the situation of replica k up to its logical clock 6.
Notice that the term "situation" is used here in an abstract sense. Depending on what information is passed in the message, the situation can differ, and this affects how the vector clock is advanced. Below, we describe the "state transfer model" and the "operation transfer model", which pass different information in their messages; the advancement of their vector clocks also differs.

Because state always flows from the replica to the client and not the other way round, the client doesn't have an entry in the vector clock; the vector clock contains exactly one entry per replica. However, the client keeps the vector clock from the last replica it contacted. This is important for supporting the client consistency models described above. For example, to support monotonic reads, the replica makes sure the vector clock attached to the data is > the vector clock the client submitted in the request.


Gossip (State Transfer Model)

In the state transfer model, each replica maintains a vector clock as well as a state version tree containing the conflicting updates (states that are neither > nor < one another under vector clock comparison). At query time, the client attaches its vector clock and the replica sends back the subset of the state tree that precedes the client's vector clock (this provides monotonic read consistency). The client then advances its vector clock by merging all the versions. This means the client is responsible for resolving the conflicts among all these versions, because when the client later sends an update, all these versions will precede its vector clock.


At update time, the client sends its vector clock and the replica checks whether the client's state precedes any of its existing versions; if so, it throws away the client's update.


Replicas also gossip among each other in the background and try to merge their version trees together.

Gossip (Operation Transfer Model)

In the operation transfer approach, the sequence in which operations are applied is very important; at a minimum, causal order needs to be maintained. Because of this ordering issue, each replica has to defer executing an operation until all preceding operations have been executed. Therefore, replicas save operation requests to a log file, exchange logs among each other, and consolidate these operation logs to figure out the right sequence in which to apply the operations to their local stores.

"Causal order" means every replica will apply changes to the "causes" before apply changes to the "effect". "Total order" requires that every replica applies the operation in the same sequence.

In this model, each replica keeps a list of vector clocks: Vi is the vector clock of the replica itself, and Vj is the vector clock of replica j as of its last gossip message to replica i. There is also a V-state that represents the vector clock of the last updated state.

When a query is submitted by the client, it also sends along its vector clock, which reflects the client's view of the world. The replica checks whether it has a view of the state that is later than the client's view.


When an update operation is received, the replica buffers the update operation until it can be applied to the local state. Every submitted operation is tagged with two timestamps: V-client indicates the client's view when making the update request, and V-receive is the replica's view when it receives the submission.

This update operation request sits in the queue until the replica has received all the other updates that this one depends on. This condition is met when the replica's vector clock Vi becomes larger than V-client.


In the background, the replicas exchange their logs of queued updates and update each other's vector clocks. After a log exchange, each replica checks whether certain operations can be applied (i.e. all the operations they depend on have been received) and applies them accordingly. Notice that multiple operations may become ready for applying at the same time; the replica sorts these operations in causal order (using the vector clock comparison) and applies them in the right order.


Concurrent updates at different replicas can also happen, meaning there can be multiple valid sequences of operations. In order for different replicas to apply concurrent updates in the same order, we need a total ordering mechanism.

One approach is that whoever does the update first acquires a monotonic sequence number, and latecomers follow that sequence. On the other hand, if the operations themselves are commutative, the order in which they are applied doesn't matter.

After applying an update, the update operation cannot be immediately removed from the queue, because it may not have been fully exchanged with every replica yet. We continuously check the vector clocks of the other replicas after each log exchange, and once we confirm that everyone has received the update, we remove it from the queue.

Map Reduce Execution

Notice that the distributed store architecture fits well into distributed processing too, for example processing a Map/Reduce operation over an input key list.

The system pushes the map and reduce functions to all the nodes (i.e. moving the processing logic towards the data). The map function is distributed to the replicas owning the input keys, and the map output is then forwarded to the reduce function, where the aggregation logic is executed.
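A toy, single-process sketch of that flow; in the real system each mapFunc call runs on a replica owning the key and only the map output travels to the reducer:

function mapReduce(keyList, store, mapFunc, reduceFunc) {
  var mapped = [];
  keyList.forEach(function(key) {
    mapped = mapped.concat(mapFunc(key, store.get(key))); // runs near the data
  });
  return reduceFunc(mapped);                              // aggregation step
}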


Handling Deletes

In a multi-master replication system that uses vector clock timestamps to determine causal order, we need to handle "delete" very carefully so that we don't lose the associated timestamp information of the deleted object; otherwise we cannot even reason about when to apply the delete.

Therefore, we typically handle a delete as a special update, marking the object as "deleted" while keeping its metadata/timestamp information around. After a long enough time that we are confident every replica has marked the object as deleted, we garbage-collect the deleted object to reclaim its space.


Storage Implementation

One strategy is to make the storage implementation pluggable: e.g. a local MySQL DB, Berkeley DB, a filesystem, or even an in-memory hashtable can be used as the storage mechanism.

Another strategy is to implement the storage in a highly scalable way. Here are some techniques I learned from CouchDB and Google BigTable.

CouchDB has an MVCC model that uses a copy-on-modify approach. Any update causes a private copy of the data to be made, which in turn requires the index to be modified, causing a private copy of the index as well, all the way up to the root pointer.

Notice that updates happen in an append-only mode: the modified data is appended to the file and the old data becomes garbage, with periodic garbage collection compacting the data. This is how the model is implemented both in memory and on disk.

In the Google BigTable model, the data is broken down into multiple generations, and memory is used to hold the newest generation. A query searches the in-memory data as well as all the data sets on disk and merges the results. Whether a generation contains a key can be detected quickly by checking a Bloom filter.
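A hedged sketch of the Bloom-filter membership test, with two toy hash functions (real implementations use stronger hashes and tune the bit-array size and hash count for the desired false-positive rate):

function BloomFilter(size) {
  this.size = size;
  this.bits = [];                      // sparse bit array
}
BloomFilter.prototype.hashes = function(key) {
  var h1 = 0, h2 = 0;
  for (var i = 0; i < key.length; i++) {
    h1 = (h1 * 31 + key.charCodeAt(i)) % this.size;
    h2 = (h2 * 17 + key.charCodeAt(i)) % this.size;
  }
  return [h1, h2];
};
BloomFilter.prototype.add = function(key) {
  var bits = this.bits;
  this.hashes(key).forEach(function(h) { bits[h] = true; });
};
// false means "definitely not in this generation"; true means "maybe, go look"
BloomFilter.prototype.mightContain = function(key) {
  var bits = this.bits;
  return this.hashes(key).every(function(h) { return !!bits[h]; });
};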

When an update happens, both the in-memory data and the commit log are written, so that if the machine crashes before the in-memory data is flushed to disk, it can be recovered from the commit log.

http://horicky.blogspot.com/2009/11/nosql-patterns.html

Sunday, November 29, 2009

Swfobject helper

This Rails plugin makes including an SWF object easier: no more writing JavaScript for that, just plain Ruby.

To install this plugin:
  script/plugin install git://github.com/japetheape/swfobject_helper.git
Make sure swfobject.js is in your javascripts dir (download here: http://code.google.com/p/swfobject/)

http://agilewebdevelopment.com/plugins/swfobject_helper_plugin

Check your scripts with JSLint on Rails

Here's how you use it:

  1. Make sure you have Java installed (5.0 or later) – it's required to run Rhino.
  2. Install the plugin:
    ./script/plugin install git://github.com/psionides/jslint_on_rails.git
  3. Run the rake task:
    rake jslint

Voila :) That's all you need to check your JS code. You will get a result like this (if everything goes well):

Running JSLint:

checking public/javascripts/Event.js... OK
checking public/javascripts/Map.js... OK
checking public/javascripts/Marker.js... OK
checking public/javascripts/Reports.js... OK

No JS errors found.

If you've messed up something, you will get such results instead:

Running JSLint:

checking public/javascripts/Event.js... 2 errors:

Lint at line 24 character 15: Use '===' to compare with 'null'.
if (a == null && b == null) {

Lint at line 72 character 6: Extra comma.
},

checking public/javascripts/Marker.js... 1 error:

Lint at line 275 character 27: Missing radix parameter.
var x = parseInt(mapX);

Found 3 errors.
rake aborted!
JSLint test failed.

http://psionides.jogger.pl/2009/11/23/check-your-scripts-with-jslint-on-rails/

Saturday, November 28, 2009

Rails 2.3.5

Rails 2.3.5 changes:
  1. Minor Bug Fixes and deprecation warnings
  2. Ruby 1.9 Support
  3. Fix filtering parameters when there are Fixnum or other un-dupable values
  4. Improvements to ActionView::TestCase
  5. Compatibility with the rails_xss plugin
http://github.com/rails/rails/tree/v2.3.5

Thursday, November 26, 2009

WebROaR

Ruby Application Server
  • Dead simple Ruby on Rails™ Application Deployment
  • 5 to 55% faster than other deployment stacks
  • Admin Panel with run time performance numbers
  • Exception Notifications
  • Free & Open Source Software
Features
  • Simplified deployment with maximum performance
  • Runs Ruby on Rails™ as well as other Rack compliant applications
  • Run multiple applications simultaneously
  • Intelligent load balancing
  • Dynamically reap stuck Ruby processing instances
  • Provides run time performance data for the deployed applications
  • Generates notifications in case exceptions occur in any of the deployed applications
Download
  • WebROaR can be downloaded and installed using the following commands:

    gem sources -a http://gems.github.com
    sudo gem install webroar
    sudo webroar install

  • Or if one likes living in the fast lane, the edge version can be installed using the following commands:

    git clone git://github.com/webroar/webroar
    cd webroar
    sudo rake install
http://webroar.in

Wednesday, November 25, 2009

Creating images from unicode text using rmagick

To create images from Unicode text, we can use the "encoding" method of the Draw object.
Example:
require "RMagick"
def show_textimg
bg = Magick::Image.new(120,20){self.background_color = "#9E9E9E"}
text = Magick::Draw.new
text.encoding = "Unicode"
text.text(23,14,"ドの半角⇔全角")
text.draw(bg)
bg.write "#{RAILS_ROOT}/public/images/text.jpg"
end
http://thinkingrails.blogspot.com/2009/08/creating-images-from-unicode-text-using.html

Tuesday, November 24, 2009

Amp

About Amp

Amp is a general-purpose version-control system. It currently implements Mercurial, and we hope to support git, bazaar, svn, cvs, and darcs in the future. Why? Well, that leads us to the question: "Why Amp?"

Amp is NOT:

Amp does not define a repository format, and most likely never will.

Then why make a VCS?

Amp exists because there are plenty of excellent repository formats out there, but none of them is truly good software. We chose Mercurial as our first VCS to implement because it comes closest to what we feel is a solid user experience, and that's what we're building upon.

Amp exists to make VCS work for you. Want to add your own commands? Write a few lines of code. Want to use git's commands on a Mercurial repository, switches and all? Amp is working on it. Our goal is to produce a piece of software that lets you forget that you're working on a git project one moment and a Mercurial project the next.

Amp's Features

  • Workflows - customizable command sets (e.g. git's commands, svn's commands)
  • Commands - work with your VCS on your terms
  • Ampfiles - tweak amp's settings for a specific repository with one file
  • Want more features? Help develop Amp! We've got a lot planned!
http://amp.carboni.ca/

Sunday, November 22, 2009

Diagnose and Prevent AJAX Performance Issues!

AJAX improves user experience by moving more code to the browser. Frameworks accelerate development, but lead to opaque application behavior and new performance issues.
dynaTrace AJAX Edition aims to solve these issues:

  • Understand performance as real users experience it
  • Differentiate between browser or server bottlenecks
  • Trace asynchronous JavaScript executions for the full round-trip
  • Analyze JavaScript, AJAX remoting, network and rendering performance in real-time
  • Save performance data for interactive offline analysis
  • Transform Selenium/Watir tests into performance tests and integrate them with your CI environment
http://ejohn.org/blog/deep-tracing-of-internet-explorer/
http://ajax.dynatrace.com/pages/

Wednesday, November 18, 2009

Redmine

Redmine is a flexible project management web application. Written using the Ruby on Rails framework, it is cross-platform and cross-database.

Redmine is open source and released under the terms of the GNU General Public License v2 (GPL).

Overview

  • Multiple projects support
  • Flexible role based access control
  • Flexible issue tracking system
  • Gantt chart and calendar
  • News, documents & files management
  • Feeds & email notifications
  • Per project wiki
  • Per project forums
  • Time tracking
  • Custom fields for issues, time-entries, projects and users
  • SCM integration (SVN, CVS, Git, Mercurial, Bazaar and Darcs)
  • Issue creation via email
  • Multiple LDAP authentication support
  • User self-registration support
  • Multilanguage support
  • Multiple databases support
http://www.redmine.org/

nested relations in ActiveRecord

class State < ActiveRecord::Base
  has_many :cities
end

class City < ActiveRecord::Base
  has_many :streets
  belongs_to :state
end

class Street < ActiveRecord::Base
  has_many :houses
  belongs_to :city
end

class House < ActiveRecord::Base
  belongs_to :street
end
How do you get all of the Houses in virginia?

Well, you could do this:

virginia.cities.collect{|c| c.streets}.flatten.uniq.collect{|s| s.houses}.flatten.uniq
... but that's epically lame. It looks like shit and produces a metric ton of queries, and as a result is highly inefficient.

How about this:

House.find(
  :all,
  :include => {:street => {:city => :state}},
  :conditions => {'states.id' => virginia.id}
)

This is a single query with joins where appropriate, and it's a finder on House which is what you're getting anyways. Makes sense, no?

http://joshsharpe.com/archives/56

Understanding the MySQL forks

But in the distributed revision control world we live in, it's never really that simple. Here are some other notes:

  • XtraDB is an InnoDB fork and "mostly" only a storage engine. It has a future in MariaDB and Drizzle, but may not make it into MySQL (about to be owned by Oracle, who also owns InnoDB).
  • OurDelta collects more patches than just the Percona patches, and enables the PBXT storage engine. There's also a repository of OurDelta for 5.1 - binaries just aren't out yet.
  • Like XtraDB, PBXT has a future in Drizzle, MariaDB and *maybe* also MySQL (Oracle may be more friendly towards PBXT than XtraDB, but Oracle is not known to be friendly so who knows!). I left PBXT absent from the diagram for not having much of a lineage with previous MySQL releases (flames welcome if you disagree).
  • XtraDB draws a lot from the Google patches for MySQL/InnoDB (not present on the diagram either).
  • Drizzle is now incompatible with everything else in MySQL-land terms (replication, partitioning, etc), and they're happy to be. In storage engine terms, they're still compatible though, which leads me to describe Drizzle as a completely new "userland", with storage engine options still remaining very similar.
  • MariaDB is both a new storage engine (Maria) and a userland "delta" of MySQL. I call it a delta since it is trying to keep binary-format compatibility with MySQL wherever it can. This means that it makes a good retrofit, but it limits what they can do without changing things like the .frm format, etc.
  • The InnoDB plugin has a weird history. Oracle announced it as an optional replacement for the 5.1 InnoDB but not much has become of it since introduction. I would have expected 5.4 to be based off the plugin, but it's not (at least at this point).
  • MySQL is the real loser in the direction the patches are moving. While all other forks are free to share amongst themselves (where compatible) and take from MySQL, MySQL will only accept a patch if the author signs a Contributor's License Agreement. They need to do this - otherwise they can't sell OEM copies, which still make a large chunk of sales. Until recently, MySQL didn't accept any InnoDB patches unless they came downstream from Oracle, and Oracle keeps InnoDB development as a closely guarded secret - which makes influencing it very difficult.
http://mtocker.livejournal.com/50931.html

Tuesday, November 17, 2009

JSGI

JSGI is a web server interface specification for JavaScript, inspired by Ruby's Rack (http://rack.rubyforge.org/) and Python's WSGI (http://www.wsgi.org/). It provides a common API for connecting JavaScript frameworks and applications to web servers.

Jack is a collection of JSGI compatible handlers (connect web servers to JavaScript web application/frameworks), middleware (intercept and manipulate requests to add functionality), and other utilities (to help build middleware, frameworks, and applications).
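The core of JSGI is a single function from request to response. The exact response shape varied across early drafts (a Rack-style [status, headers, body] triple in some, a plain object in later ones); here is a sketch using the object form:

// a minimal JSGI-style application
exports.app = function(request) {
  return {
    status: 200,
    headers: {"Content-Type": "text/plain"},
    body: ["Hello from a JSGI app\n"]
  };
};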

http://jackjs.org/

Narwhal

Narwhal is a cross-platform, multi-interpreter, general purpose JavaScript platform. It aims to provide a solid foundation for building JavaScript applications, primarily outside the web browser. Narwhal includes a package manager, module system, and standard library for multiple JavaScript interpreters. Currently Narwhal’s Rhino support is the most complete, but other engines are available too.

Narwhal’s standard library conforms to the ServerJS standard. It is designed to work with multiple JavaScript interpreters, and to be easy to add support for new interpreters. Wherever possible, it is implemented in pure JavaScript to maximize reuse of code among engines.

Combined with Jack, a Rack-like JSGI compatible library, Narwhal provides a platform for creating server-side JavaScript web applications and frameworks such as Nitro.

http://narwhaljs.org/

Sunday, November 15, 2009

DB Charmer – ActiveRecord Connection Magic Plugin

DbCharmer is a simple yet powerful plugin for ActiveRecord that does a few things:

  1. Allows you to easily manage AR models’ connections (switch_connection_to method)
  2. Allows you to switch AR models’ default connections to a separate servers/databases
  3. Allows you to easily choose where your query should go (on_* methods family)
  4. Allows you to automatically send read queries to your slaves while the master handles all the updates
  5. Adds multiple-database migrations to ActiveRecord

Thursday, November 12, 2009

OurDelta

OurDelta produces enhanced builds for MySQL 5.0 and builds for MariaDB 5.1, on common production platforms. James Purser of Open Source On The Air describes OurDelta as “a new distribution for MySQL”.

http://ourdelta.org/

Firebug HTTP Time Monitor

Firebug now uses a component called http-activity-distributor that allows registering a listener and getting notifications about the various phases of each network request (DNS lookup, connecting, sending, etc.). Most importantly, one of the parameters passed to the listener is a time-stamp. This is something that was missing until now.

Having the time-stamp is critical, since JavaScript code (and Firebug is entirely implemented in JavaScript) is executed on the Firefox UI thread. When the UI is blocked by a time-expensive operation (e.g. DOM rendering, script execution, etc.), any event sent to a JavaScript handler (and so handled on the UI thread) can be delayed. So, getting the time-stamp within a JS handler can produce distorted timing results.

The Firebug Net panel now fixes this problem and the timing info is correct. See a couple of examples I analyzed when testing with Cuzillion, a nice online tool developed by Steve Souders.

Inline Scripts Block

Inline scripts block downloads: any resources (e.g. images) below an inline script don't get downloaded until the script finishes execution.

Let's imagine a page with the following structure.

<html>
<head></head>
<body>
<img src="resource.cgi" />
<script> {an inline script running for 5 sec } </script>
<img src="resource.cgi" />
</body>
</html>
Here is the timeline of such a page displayed in Firebug.

(Screenshot: timeline showing the inline script blocking the second image.)

The first image resource.cgi starts downloading immediately after the page itself is downloaded. The second image resource.cgi starts downloading after a 5 sec delay caused by the inline script. Try the example online.

The green area represents the connecting time and the purple area represents the waiting-for-response time. The blue line shows when the DOMContentLoaded event was fired and the red line is associated with the page load event.

Connection Limit

Firefox limits the number of connections per server to 6. If the limit is reached, other requests wait in an internal queue. This value can be changed by editing the network.http.max-persistent-connections-per-server preference in about:config.

See how the timeline looks for a page that loads 8 images.

(Screenshot: Firefox limits the number of connections per server.)

See the last two images. These wait in a queue (light brown area) until there is a free connection. The 7th image starts downloading as soon as the first image finishes, sharing the same connection (notice there is no green connection time). Similarly, the 8th image starts when the 2nd is completed. Try the example online.

http://www.softwareishard.com/blog/firebug/firebug-http-time-monitor/

Firebug 1.5: XHR Breakpoints

Create XHR Breakpoint

In order to create an XHR breakpoint, the Net panel offers the same breakpoint bar that is already well known from the Script panel.

Breakpoint bar within the Net panel.

So, creating a new breakpoint for an XHR request is as easy as clicking on the row of the request the user wants to debug.

As soon as the breakpoint is there and a request to the same URL is executed by the current page again, Firebug halts JavaScript execution at the source line where the request was initiated.

Javascript execution halted on XHR Breakpoint

You can see two things in this screenshot. First, there is no breakpoint in the source code, so you don't have to know the source line in advance to start debugging; second, the breakpoint is listed in the Breakpoints side panel in a new section called XHR Breakpoints.

Set Breakpoint Condition

If you want to halt the JavaScript execution only in specific cases, you can specify a condition. Again, this is done the same way as specifying a condition for a regular JavaScript breakpoint: just right-click on the breakpoint circle.

Condition editor for XHR breakpoints

The condition editor allows specifying a JavaScript expression that, if it evaluates to true, activates the breakpoint. The condition is evaluated in a scope with some built-in variables that can be used in your expression.

A couple of examples:

Halt JS execution only if URL parameter count is present and equal to 1.
URL: http://www.example.com?count=1
Expression: count == 1

Halt JS execution only if posted data contains string key.
URL: http://www.example.com?count=1
Expression: $postBody.indexOf("key") >= 0

See online demo here.

http://www.softwareishard.com/blog/firebug/firebug-15-xhr-breakpoints/

Firebug 1.5: Break On Next

The entire feature is represented by a new Break On Next button that is available in Firebug's main toolbar.

Break On Next button in Firebug's UI

The logic of the button depends on the panel that is currently selected in Firebug. For instance, if the Script panel is selected and the feature is activated (by clicking the button, which starts throbbing), the debugger halts as soon as the next line of JavaScript is executed. So, this is the moment when you, for example, press a button in your web app and see what code is actually called.

Here is how the button works in the other Firebug panels (the button is disabled for panels that don't support this feature):

  • Script: Break on the next executed script.
  • HTML: Break on HTML mutation (element addition or removal and attribute change).
  • Net: Break on XMLHttpRequest execution.
  • Console: Break on Javascript error.
http://www.softwareishard.com/blog/firebug/firebug-15-break-on-next/

Wednesday, November 11, 2009

Closure Compiler

The Closure Compiler is a tool for making JavaScript download and run faster. It is a true compiler for JavaScript. Instead of compiling from a source language to machine code, it compiles from JavaScript to better JavaScript. It parses your JavaScript, analyzes it, removes dead code and rewrites and minimizes what's left. It also checks syntax, variable references, and types, and warns about common JavaScript pitfalls.
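For a flavor of what "better JavaScript" means, here is roughly what the simple optimization level does to a toy function (the output below is illustrative; exact renaming depends on the compilation level and flags):

// input
function sayHello(longName) {
  alert('Hello, ' + longName);
}
sayHello('New user');

// roughly the output: comments and whitespace stripped, locals renamed
function sayHello(a){alert("Hello, "+a)}sayHello("New user");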

http://code.google.com/closure/compiler/
http://code.google.com/closure/compiler/docs/error-ref.html

Monday, November 9, 2009

Tooltip.js

jQuery:
tip.init(selector) # the tooltip text is taken from the element's "helper" attribute
<img src="..." helper="..." />
tip.init('img[helper]')
var tip = {
  init : function(e){
    $(e).bind("mouseenter", this.createTip);
    $(e).mouseleave(function(){ $("#tip").remove() });
    $(e).click(function(){ $("#tip").remove() });
    $(e).mousemove(function(e){
      $("#tip").css({left: e.pageX + 30, top: e.pageY - 16})
    });
  },
  createTip : function(e) {
    var obj = $(e.currentTarget),
        title = $.trim(obj.attr("helper"));
    if (title.length > 0) {
      // create the tooltip div only if it doesn't exist yet
      return $("#tip").length === 0 ?
        $("<div>").html("<span>" + title + "</span>").
          attr("id", "tip").
          css({left: e.pageX + 30,
               top: e.pageY - 16,
               position: 'absolute',
               border: '1px solid #FFE222',
               background: '#FFFBC2',
               color: '#514721',
               padding: '5px 10px',
               textTransform: 'lowercase',
               fontVariant: 'small-caps',
               zIndex: 9000}).
          appendTo("body") : null;
    } else { return false }
  }
};
Prototype:
tip.createTip(container, selector) # the tooltip text is taken from the element's "title" attribute
<div id="container">
<img src="..." title="..." />
</div>
tip.createTip($('container'), 'img[title]')
var tip = {
  title : '',
  opacity : .8,
  marginX : 30,
  marginY : -46,
  position : function(i, e){
    return i.setStyle({left: e.clientX + this.marginX + 'px',
                       top: e.clientY + this.marginY + 'px'})
  },
  enter : function(i){
    var tip = this;
    $(i).observe(antHill.events.menter, function(e){
      var div, obj = $(e.currentTarget);
      tip.title = obj.readAttribute("title");
      obj.writeAttribute("title", '');
      if ($("tip") === null){
        div = new Element('div', {id: 'tip'});
        div.addClassName('tooltip').
          setOpacity(tip.opacity).
          innerHTML = "<span>" + tip.title + "</span>";
        tip.position(div, e);
        $($$('body')[0]).insert({bottom: div});
      }
      Event.stop(e)}.bindAsEventListener($(i)));
  },
  leave : function(i){
    $(i).observe(antHill.events.mleave, function(e){
      $("tip").remove();
      $(i).writeAttribute("title", tip.title);
      Event.stop(e)}.bindAsEventListener($(i)));
  },
  move : function(i){
    $(i).observe(antHill.events.mmove, function(e){
      antHill.tip.position($("tip"), e);
      Event.stop(e)}.bindAsEventListener($(i)));
  },
  init : function(i){
    this.enter(i);
    this.leave(i);
    this.move(i);
  },
  createTip : function(c, i){
    c.select(i).each(function(i){ tip.init(i) })
  }
};

Thursday, November 5, 2009

AutoMySQLBackup

A script to take daily, weekly and monthly backups of your MySQL databases using mysqldump. Features:
  • Backup multiple databases
  • Single backup file or a separate file for each DB
  • Compress backup files
  • Backup remote servers
  • E-mail logs

http://sourceforge.net/projects/automysqlbackup/

7 Free Tools to Minify your Scripts and CSS

  1. JSMin (JavaScript Minifier) - removes comments and unnecessary whitespace from JavaScript files
  2. JSO (JavaScript Optimizer) - allows you to manage your JavaScript and CSS resources and to reduce the amount of data transferred between the server and the client.
  3. Packer – An online JavaScript Compressor
  4. JSCompress.com – Online tool that uses either JSMin or Packer to compress your files
  5. CSS Compressor – Online tool that compresses your CSS file
  6. DigitalOverload JavaScript Minifier – Online tool that minifies your JavaScript files
  7. YUI Compressor – A JavaScript minifier designed to be 100% safe and yields a higher compression ratio than most other tools.
http://www.devcurry.com/2009/11/7-free-tools-to-minify-your-scripts-and.html

Sunday, November 1, 2009

Caja

Caja allows websites to safely embed DHTML web applications from third parties, and enables rich interaction between the embedding page and the embedded applications. It uses an object-capability security model to allow for a wide range of flexible security policies, so that the containing page can effectively control the embedded applications' use of user data and prevent interference between gadgets' UI elements.

Today, some websites embed third-party code using iframes. This approach does not prevent a wide variety of attacks: redirection to phishing pages which could pretend to be a login page for the embedding application; stopping the browser from working until the user downloads malware; stealing history information about which sites a user has visited so that more targeted phishing attacks can be mounted; and port scanning the user's local network. Finally, even though a website can choose not to give data to an iframe app, once it has done so it can place no further restrictions on what the iframe app does with it: it cannot stop the iframe app from sending that data elsewhere.

Caja addresses these problems, which are not addressed by iframe jails, and it does so in a very flexible way. If a container wishes to allow an embedded application to use a particular web service, but not to send arbitrary network requests, it can give the application an object that interacts with that web service, but deny access to XMLHttpRequest. Under Caja, passing objects grants authority and denying access to objects denies authority, as is typical in an object-capability environment. Information leakage can be prevented by allowing user data to be encapsulated in objects that can be rendered in user-readable form but not read by scripts; we can prevent leakage without solving the problem of covert channels.

http://code.google.com/p/google-caja/

Tuesday, October 13, 2009

Accordion.js

accordion.init params (e.g. accordion.init(5, 1, "ul.menu", "slow")):
  • 5: opened section index (numeric)
  • 1: autohide (bool)
  • "ul.menu": container element
  • "slow": speed
On each page, a "#navid" element (containing a section name) can be rendered to set the focus.

HTML
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">
<head>
<script type="text/javascript" src="jquery.js"></script>
<script type="text/javascript" src="accordion.js"></script>
<script type="text/javascript">
$(document).ready(function(){accordion.init(5,1,"ul.menu","slow")});
</script>
</head>
<body>
<ul class="menu">
<li rel="category">
<h3 rel="title">parent</h3>
<ul level="0">
<li rel="category">
<h3 rel="title">category 1</h3>
<ul level="1">
<li class="sub_items"><a href="#">link 1</a></li>
<li class="sub_items"><a href="#">link 2</a></li>
<li class="sub_items"><a href="#">link 3</a></li>
</ul>
</li>
<li rel="category">
<h3 rel="title">category 2</h3>
<ul level="1">
<li class="sub_items"><a href="#">link 1</a></li>
<li class="sub_items"><a href="#">link 2</a></li>
</ul>
</li>
<li rel="category">
<h3 rel="title">category 2</h3>
<ul level="1">
<li class="sub_items"><a href="#">link 1</a></li>
<li class="sub_items"><a href="#">link 2</a></li>
</ul>
</li>
</ul>
</li>
<li rel="category">
<h3 rel="title">parent</h3>
<ul level="0">
<li rel="category">
<h3 rel="title">category 1</h3>
<ul level="1">
<li class="sub_items"><a href="#">link 1</a></li>
<li class="sub_items"><a href="#">link 2</a></li>
<li class="sub_items"><a href="#">link 3</a></li>
</ul>
</li>
<li rel="category">
<h3 rel="title">category 2</h3>
<ul level="1">
<li class="sub_items"><a href="#">link 1</a></li>
<li class="sub_items"><a href="#">link 2</a></li>
</ul>
</li>
<li rel="category">
<h3 rel="title">parent</h3>
<ul level="1">
<li rel="category">
<h3 rel="title">category 1</h3>
<ul level="2">
<li class="sub_items"><a href="#">link 1</a></li>
<li class="sub_items"><a href="#">link 2</a></li>
<li class="sub_items"><a href="#">link 3</a></li>
</ul>
</li>
<li rel="category">
<h3 rel="title">category 2</h3>
<ul level="2">
<li class="sub_items"><a href="#">link 1</a></li>
<li class="sub_items"><a href="#">link 2</a></li>
</ul>
</li>
<li rel="category">
<h3 rel="title">category 2</h3>
<ul level="2">
<li class="sub_items"><a href="#">link 1</a></li>
<li class="sub_items"><a href="#">link 2</a></li>
</ul>
</li>
</ul>
</li>
<li rel="category">
<h3 rel="title">category 2</h3>
<ul level="1">
<li class="sub_items"><a href="#">link 1</a></li>
<li class="sub_items"><a href="#">link 2</a></li>
</ul>
</li>
</ul>
</li>
</ul>
<span id="navid" style="display:none">category 1</span>
</body>
</html>

Javascript
$.fn.equals = function(compareTo) {
// element-wise comparison of two jQuery collections
if (!compareTo || !compareTo.length || this.length != compareTo.length) { return false }
for (var i = 0; i < this.length; i++) { if (this[i] !== compareTo[i]) { return false } }
return true;
}
var menu = function accordion(l,h,c,s){
var accordion = {
init : function(){
this.autohide=h;
this.speed=s;
this.parents=[];
this.container=$(c);
this.focus=$("#navid");
this.initMenu(l)},
categories : function(){return this.getH3(this.getLi(this.container))},
getLi : function(e){return $(e).find("li[rel=category]")},
getH3 : function(e){return $(e).find("h3[rel=title]")},
close : function(e){$(e).find("h3:first").addClass("close")},
findParents: function(p){
var c=this.container,a=this.parents;
$.each(p.parents("ul"),function(){
if($(this).equals(c)){return}
else {a.push($(this))}})
return a},
up : function(p,e,h){
var u=p.find("ul");
h?u.hide():u.slideUp(this.speed);
this.getH3(p).removeClass("close")},
down : function(p,e){
var b=$.browser.msie&&$.browser.version=='7.0',
o=this.findParents(p),
s=this.speed,
u=p.parent().find("ul[level="+e.attr("level")+"]"),
h=$.grep(u,function(a){return a!=e[0]});
b?e.show(s):e.slideDown(s);
if(this.autohide)$.each(h,function(){accordion.up($(this).parent())});
this.close(p);
$.each(o, function(){
accordion.close($(this).parent());
b?$(this).show(s):$(this).slideDown(s)})},
anime : function(t,e,h){this[t](e.parent(),e,h)},
setFocus : function(i) {
var f=this.focus,t=f.text();
if(f.length>0){$.each(this.categories(),function(k,v){if($(v).text()==t)i=k})}
return i},
initMenu : function(i,init){
var p,e=this.container.find("ul");
if(i==-1)this.setEvents(e);
if(init){e=$(e[i]);e.is(':visible')?this.anime("up",e):this.anime("down",e)}
else{this.setEvents(e);i=this.setFocus(i);this.anime("down",$(e[i]))}},
setEvents : function(e){
var h=this.categories();
h.bind("click",function(e){accordion.initMenu(h.index($(e.target)),1);});
this.anime("up",e,1)}
}
return accordion.init();
}

Thursday, October 8, 2009

iPhone & Rails

ObjectiveResource is an Objective-C port of Ruby on Rails' ActiveResource. It provides a way to serialize objects to and from Rails' standard RESTful web-services (via XML or JSON) and handles much of the complexity involved with invoking web-services of any language from the iPhone.

http://iphoneonrails.com/

Wednesday, October 7, 2009

Cross-Domain Ajax with Flash

flXHR is a *client-based* cross-browser, XHR-compatible tool for cross-domain Ajax (Flash) communication. It utilizes an invisible flXHR.swf instance that acts as sort of a client-side proxy for requests, combined with a Javascript object/module wrapper that exposes an identical interface to the native XMLHttpRequest (XHR) browser object, with a few helpful additions and a couple of minor limitations (see the documentation for more details).

flXHR requires Flash Player plugin v9.0.124 for security reasons. See the documentation for the "autoUpdatePlayer" configuration flag, which will attempt an automatic inline update of the plugin if necessary.

The result is that flXHR can be used as a drop-in replacement for XHR-based Ajax, giving you consistent, secure, efficient client-to-server cross-domain Ajax communication, without messy workarounds such as IFRAME proxies, dynamic script tags, or server-side proxying.

flXHR brings a whole new world of cross-domain Ajax and API consistency to any browser with Javascript and Flash Player plugin v9+ support (Adobe claims Flash now has 99% browser support). No other method or workaround can claim that kind of widespread support or consistency. In addition, flXHR can be dropped into many different Javascript frameworks (Dojo, Prototype, jQuery, etc.) for even easier and more robust Ajax usage.
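
Going by the description above, usage looks roughly like the sketch below. The flensed.flXHR constructor and the autoUpdatePlayer flag come from the flXHR documentation; the endpoint URL and the exact option set are illustrative:

// Create an flXHR instance and use it exactly like a native XMLHttpRequest.
var client = new flensed.flXHR({ autoUpdatePlayer: true });
client.onreadystatechange = function () {
  if (client.readyState === 4 && client.status === 200) {
    // the response came from another domain, no server-side proxy involved
    alert(client.responseText);
  }
};
client.open("GET", "http://api.example.com/data.xml");  // placeholder URL
client.send(null);

Note that, as with any Flash-based networking, the target server must still serve a crossdomain.xml policy file authorizing the request.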

http://flxhr.flensed.com/

Thursday, October 1, 2009

An Introduction to JavaScript’s “this”

JavaScript is an amazing little language, but it’s got some quirks that turn a lot of people off. One of those quirks is this, and how it’s not necessarily what you expect it to be. this isn’t that complicated, but there are very few explanations of how it works on the internet. I find myself constantly re-explaining the concept to those who are new to JavaScript development. This article is an attempt to explain how this works and how to use it properly.
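
A minimal illustration of the quirk (obj and fn are just placeholder names):

var obj = {
  name: "obj",
  getName: function () { return this.name; }
};

obj.getName();       // "obj" -- called as a method, so this === obj

var fn = obj.getName;
fn();                // not "obj" -- called as a plain function, this is the
                     // global object, so this.name is window.name (usually "")

fn.call(obj);        // "obj" -- call() sets this explicitly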

http://justin.harmonize.fm/index.php/2009/09/an-introduction-to-javascripts-this/

BugMash

Have you ever wondered how you could get started contributing to the core Rails code? Have you been watching the growth of RailsBridge and wondering where you could fit in? Well, wonder no longer: we have an answer to both of those questions. Announcing:

The First Rails and RailsBridge BugMash

The idea is simple: RailsBridge has a lot of energy. The Rails Lighthouse has a lot of open tickets. With the help of some Rails Core team members, we're going to see what we can do to cut down the number of open tickets, encourage more people to get involved with the Rails source, and have some fun.

  1. Confirm that the bug can be reproduced
  2. If it can't be reproduced, try to figure out what information would make reproduction possible
  3. If it can be reproduced, add the missing pieces: better repro instructions, a failing test, and/or a patch that applies cleanly to the current Rails source
  4. Bring promising tickets to the attention of the Core team

Some of the Bridgers will be organizing a face-to-face way for BugMash participants to come together (Teams), but there's no need to be there to be a part of it. We'll also have a room open on IRC, and people who are familiar with the Rails internals will be available to help point you in the right direction. We're going to do everything we can to make it easy to start contributing to Rails.

We'll be adding more details to this bare outline over the coming week, including a checklist of what you can do to get ready to work in the Rails source and details on a scoring system and rewards for the most active participants. For now, though, there are two things for you to do:

  1. Reserve at least a chunk of that weekend to roll up your sleeves and work on the BugMash
  2. Speak up if you can contribute prizes, familiarity with the Rails source, or other help to the project.

Official BugMash hours

Rails contributors are located all over the world, so we're going to define an extended weekend for the BugMash: from Saturday noon in New Zealand (00:00:00 September 26 GMT) to Sunday midnight on the US West coast (07:00:00 September 28 GMT). That should give everyone who wants to be involved plenty of time to participate.

Resources
http://railsbridge.org/
http://wiki.railsbridge.org/projects/railsbridge/wiki/BugMash

YUI 3 Is Out!

This is a ground-up redesign of YUI:

  1. Selector-driven: YUI 3 is built around one of the lightest, fastest selector engines available, bringing the expressive power of the CSS selector specification into actions that target DOM nodes.
  2. Syntactically terse: Without polluting the global namespace, YUI 3 supports a more terse coding style in which more can be accomplished with less code.
  3. Self-completing: YUI 3’s light (6.2KB gzipped) seed file can serve as the starting point for any implementation. As long as this seed file is present on the page, you can load any module in the library on the fly. And all modules brought into the page via the built-in loader arrive via combo-handled, non-blocking HTTP requests. This makes loading the library safe, easy and fast.
  4. Sandboxed: YUI modules are bound to YUI instances when you use() them; this protects you against changes that might happen later in the page’s lifecycle. (In other words, if someone blows away a module you’re using after you’ve created your YUI instance, your code won’t be affected.) A short usage sketch follows this list.
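
As a rough sketch of the points above (YUI(), use() and the 'node' module are real YUI 3 API; the selectors and the #status id are illustrative):

// The seed file (yui-min.js) must already be on the page; use() then fetches
// the "node" module on the fly and hands the callback a sandboxed instance Y.
YUI().use('node', function (Y) {
  // selector-driven and terse: target DOM nodes with CSS selectors
  Y.all('ul.menu a').addClass('visited');
  Y.one('#status').set('text', 'modules loaded');
});
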
http://ajaxian.com/archives/yui-3-is-out
http://developer.yahoo.com/yui/3/

Tuesday, September 15, 2009

Ruby on Rails application with Adobe Flex & RestfulX Framework

A simple tutorial on how to create a Ruby on Rails application with Adobe Flex and the RestfulX Framework in 10 minutes and 10 steps.