
Friday, March 12, 2010

Tech Talk: The Architectural Design of

Since there has been little to no interest in the guts of how my new webapp is architected, designed, and implemented, I'm going to talk at length about it anyway :D I think some of the design choices--learned from many, many failures as well as successes--are interesting and perhaps novel.

The site is written in a couple of different languages, the two main back-end languages being Perl and XSL. There are two main helper modules (one called uiHelper, the other sqlHelper) used by the main script, and the core goals of the Perl code are 1) database wrangling, 2) business logic, and 3) generating the XML 'UI'. I call it the UI even though it really isn't; it is an XML document describing all the data that the real UI generator--the XSL sheet--needs to generate the HTML. This means that all presentation code lives in XSL, and the scripting doesn't need to worry about anything other than sanitizing incoming data, dealing with MySQL, and appending XML to an in-memory document.

The database helper module is a wrapper for DBI, but it does a few other novel things. First, it normalizes all responses from the database into an array of hashes. Second, it automatically pretty-prints things like dates and other data types. Lastly, it implements the ability to set callbacks on records that have been retrieved. It also understands how to talk to several databases at the same time, but I don't really need that feature for this project.
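I don't have the sqlHelper source in front of me, but the normalization and pretty-printing steps might look roughly like this sketch; prettify_row, normalize_rows, and the "a column ending in _date holds a Unix epoch" convention are my assumptions for illustration, not the module's real API:

```perl
use strict;
use warnings;
use POSIX qw(strftime);

# Pretty-print any column whose name ends in '_date', assuming it holds
# a Unix epoch value. (Hypothetical convention -- the real module's
# rules for "dates and other data types" aren't shown in the post.)
sub prettify_row {
    my ($row) = @_;
    for my $col (keys %{$row}) {
        next unless $col =~ /_date$/ && defined $row->{$col};
        $row->{$col} = strftime("%Y-%m-%d", localtime($row->{$col}));
    }
    return $row;
}

# Normalize a DBI statement handle's results into an array of hashrefs,
# pretty-printing each row along the way.
sub normalize_rows {
    my ($sth) = @_;
    my @rows;
    while (my $row = $sth->fetchrow_hashref) {
        push @rows, prettify_row($row);
    }
    return \@rows;
}
```

The payoff of normalizing everything into one shape is that every caller can iterate the same way, regardless of the query.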

Using the module to talk to the DB is even simpler than using DBI. Let's say we have an object named $db that is connected to our database; one would simply:
my $results = $db->sql("select * from users;");

And you would end up with $results populated with an array of records, each of which contains a hash of columns. I considered an iterator method, but for the sake of simplicity you just use loops:
for (my $i=0; $i<@{$results}; $i++) {
    foreach my $key (keys %{$results->[$i]}) {
        print "$key: $results->[$i]->{$key}\n";
    }
}

Even though it is super useful to have access to all the records of a request at the same time (i.e. Data::Dumper $results), the better-looking way to access the records is with callbacks.

Before you issue your sql statement, simply define:

sub myCallback {
    my $ref = shift;
    print Dumper $ref;
}

That subroutine will be called once per record with a reference to that specific record's hash, and you can do anything you want with it. Since the record is passed by reference and not by value, you can do quick things like:

sub myCallback {
    my $ref = shift;
    $ref->{count}++;
}

print Dumper $results;

And you will see what you would expect: all the counts incremented. It also knows how to resolve package-qualified names, so your callback can really live anywhere.
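To make the flow concrete, here is a self-contained sketch of that per-record dispatch; apply_callback is an invented name standing in for whatever sqlHelper does internally when a callback is registered:

```perl
use strict;
use warnings;

# Hypothetical sketch of the callback dispatch: call the callback once
# per record, handing it the record hashref so mutations stick.
sub apply_callback {
    my ($results, $cb) = @_;
    $cb->($_) for @{$results};
    return $results;
}

# Fake result set in the same shape the wrapper produces:
# an array of hashes, one hash per record.
my $results = [
    { user_key => 1, count => 4 },
    { user_key => 2, count => 9 },
];

# Same idea as the post's example: bump each record's count in place.
apply_callback($results, sub { my $ref = shift; $ref->{count}++ });
```

Because each record is passed by reference, the increments are visible in $results afterward without any copying back.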

This actually gets interesting when combined with the uiHelper object, though, since it understands how to parse the results from the sqlHelper module. For instance, this method is one I use all the time:
$ui->appendSql("users", $db->sql("select user_key from users;"));

This will populate the in-memory XML doc with a structured list of all user_keys. Or perhaps take this approach:
$ui->appendSql("users", $db->sql("select user_key, count from users;"));

sub myCallback {
    my $ref = shift;
    $ref->{count}++;
}

And you have the modified record in the XML doc, waiting to be transformed by XSL into the actual HTML.
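I can only guess at uiHelper's internals, but the record-to-XML mapping that appendSql implies could be sketched like this; xml_for_records and xml_escape are invented names, and the real module may well use a proper XML library rather than string concatenation:

```perl
use strict;
use warnings;

# Escape the XML special characters so column values can't break
# the document. Minimal version for element content only.
sub xml_escape {
    my ($s) = @_;
    $s =~ s/&/&amp;/g;
    $s =~ s/</&lt;/g;
    $s =~ s/>/&gt;/g;
    return $s;
}

# Wrap each record's columns in elements under a named container --
# roughly the shape the XSL sheet would need to iterate over.
sub xml_for_records {
    my ($name, $records) = @_;
    my $xml = "<$name>";
    for my $row (@{$records}) {
        $xml .= "<record>";
        for my $col (sort keys %{$row}) {
            $xml .= "<$col>" . xml_escape($row->{$col}) . "</$col>";
        }
        $xml .= "</record>";
    }
    return $xml . "</$name>";
}
```

With that shape, an xsl:for-each over the container name walks the records directly.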

As to why I am not using an ORM/Mapper/Stored Procedures/Triggers/etc.: server execution time is cheap, my time is not. Using this kind of method means that 1) all the interfaces are consistent, 2) features can be added in minutes, not hours, and 3) code is virtually self-documented.

After the XML has been formed, it is sent off to be processed by Xalan and the XSL style sheet. This parses the XML, selects the parts that are important, iterates through the data, and builds the HTML. A typical section of XSL would look like:
<xsl:for-each select='myPlants'>
  <b><xsl:value-of select='plantName'/></b>: <xsl:value-of select='plantCount'/><br/>
</xsl:for-each>

The beauty of this kind of system is that all of the UI logic is completely removed from the business logic, and it all has a visual consistency since it is ultimately based on XML. Moving from SQL to Perl to HTML to Javascript to CSS is all very logical and not jarring, since they are all their own little ecosystems.

Other Services
In the background, quite a few other services are constantly running. They include:

  • KinoSearch, which creates full text searchable indexes of plants, calendars, comments, forums, and pictures

  • Email Queuer, which figures out what needs the attention of the users of the system, and queues them up for sending

  • The Emailer, which wakes up every ten or so minutes and batches the email queue up, so users only receive a single email with lots of items, instead of lots of emails

  • Weather Forecaster, which gathers near real time weather data so the system can make intelligent guesses on what needs to be done per plant

  • Average Weather Updater, which pulls data from several sources used to determine average planting/transplant/harvest dates based on location.

  • Site Denormalizer, which takes most of the important dynamic pages from the site and converts them to static HTML, for the benefit of web spiders such as Google.
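
The Email Queuer/Emailer pair above suggests a simple digest step; here is a hedged sketch of the batching, where batch_queue and the queue-item shape are illustrative assumptions rather than the site's real code:

```perl
use strict;
use warnings;

# Group queued notifications by recipient so each user gets one digest
# email instead of many single emails. The Emailer would then wake up,
# render each digest, and send it.
sub batch_queue {
    my ($queue) = @_;
    my %digest;
    for my $item (@{$queue}) {
        push @{ $digest{ $item->{email} } }, $item->{message};
    }
    return \%digest;
}

# Hypothetical queue contents for illustration.
my $queue = [
    { email => 'a@example.com', message => 'Water the basil' },
    { email => 'b@example.com', message => 'Harvest due' },
    { email => 'a@example.com', message => 'Frost warning tonight' },
];

my $digest = batch_queue($queue);
# $digest->{'a@example.com'} now holds both of that user's messages.
```

Running the grouping on the Queuer side and the sending on a timer keeps the two services decoupled, which is why the Emailer can wake up only every ten minutes or so.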

I'll add more details on plant family/class/instance/type/subtype hierarchies, and how they influence the overall predictions, in a later post. Please let me know if there are questions/comments/insults/whatever, and have fun making cheese!


Creative Commons License
Cheese A Day by Jeremy Pickett is licensed under a Creative Commons Attribution-Noncommercial 3.0 United States License.