You know the problem… “We’d like users to be able to authenticate using Facebook so they don’t have to create an account on our system.”

Here at Secret Source Ltd we do lots and lots of WordPress development and lately, nearly every project has required some sort of Facebook integration, the ability to log in with your Facebook account in particular. Fortunately there are multiple plugins to help you with that. We’re quite fond of the Nextend Facebook Connect plugin as it is very easy to configure and includes clear instructions on how to create a Facebook app and get everything working like a charm.

For me, though, there has always been one particular point of pain that none of these plugins has figured out how to work around: unless you want to be responsible for a Facebook app for your client’s logins, your client needs to give you THEIR Facebook credentials so you can log in AS THEM to create and configure the Facebook login app.

However, I recently figured out a way around this. I know I may be late to the party but I’ve been unable to find this approach documented anywhere, or maybe I just didn’t search hard enough.

The approach is as follows:

  1. Log in to the Facebook Developer portal as yourself.
  2. Create a new app and configure it for your client, using their logo, domain, and everything.
  3. Go to Roles -> Administrators -> Add Administrators and add your client as an administrator. Note that you and your client do not need to be friends in order for you to add them, but you will need to know either their Facebook ID or Facebook “username”.

Once your client accepts your invitation to be an administrator, they can then go to the Roles tab and remove YOU from the list of administrators. The app is now THEIRS and your job is done! Note that if needed, the client can always add you back as an administrator.

This, to me, seems like a very reasonable way to set this up and I can, for once, say I am quite happy with Facebook’s developer tools.

System Configuration

I have a CentOS 7 host machine with two KVM guests:

  1. Smoothwall 3.1
  2. Ubuntu 14.04

The Smoothwall is configured RED+GREEN+ORANGE with three separate network interfaces. The RED and GREEN interfaces are connected via bridged Macvtap with virtio drivers to two separate host interfaces (enp5s0 and enp6s0) which, in turn, are connected to two physical network cards on the machine. ORANGE is connected to a guest-only virtual switch. RED is connected to the same upstream network/gateway (and thus the Internet) as the host interface on the same card, but with different IP addresses: RED’s address is static, entered manually, while the host’s is assigned via DHCP.

After a restart of the host, all machines, host and guests, have all of the expected network connectivity, and everything just works.

The Problem

Where I live we experience periodic Internet outages lasting from a second or two to several minutes. Every time there is an outage, the RED interface loses all connectivity, both inbound and outbound, and it stays that way even once connectivity to the host has been restored (it may also be down during the outage itself).

Following the loss of connectivity, in the host’s Settings -> Network control panel the interface associated with RED (Macvtap0), which is normally visible and editable, is still visible but its options can no longer be changed: the Options button is missing, and so is the ON/OFF button. Furthermore, it no longer shows anything more than a MAC address (normally it shows both a MAC and an IPv6 address).

Control Panel Before Outage

Macvtap working properly

Control Panel After Outage

Macvtap failing

I’ve tried changing the driver (virtio, rtl8139, e1000, etc.) and I’ve tried changing the physical hardware but neither change has altered the behavior so I’m pretty sure it’s something in the Macvtap software that’s failing (or misconfigured).

The Solution?

There are multiple articles and bug reports that seem to indicate that this can be repaired by enabling promiscuous mode on the interface, but I tried that (sudo ifconfig Macvtap0 promisc) to no avail. Maybe I needed to toggle the interface (ifup or something like that?)
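If those reports are right, the missing step may simply be bouncing the interface after enabling promiscuous mode, something along these lines (untested in this scenario; the interface name is the one shown above and may be spelled differently on your system):

sudo ip link set macvtap0 down
sudo ip link set macvtap0 up
sudo ip link set macvtap0 promisc on   # re-enable promiscuous mode after the bounce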

I’m new to WordPress development and recently found myself needing to create a secondary query using WP_Query. The Codex has a pretty good introduction to querying WordPress using WP_Query but nowhere, not even in the new Developer documentation, do they say that you MUST call the_post() inside the loop. If you fail to call the_post(), you may get the error “Fatal error: Maximum function nesting level of ‘100’ reached, aborting!”

To the documentation team’s credit, they do point out that if you fail to call the_post(), many template tags will fail to function as expected. Also, in a few places they do mention that the_post() increments the internal counter by one. It makes me wonder why have_posts() doesn’t just call the_post() internally and be done with it…

I lost hours trying to figure this out. And just for the sake of completeness, here is a full example of what WordPress needs to properly execute a loop as a secondary query (and it matters not that this is against a custom post type):

$my_query = new WP_Query( $args );

while ( $my_query->have_posts() ) {
	$my_query->the_post(); // does nothing visual, just sets the global $post and advances the internal pointer
	the_title(); // note that we do NOT use $my_query->the_title() – the_title() is a global template tag that reads the global $post set by the_post() above
}

wp_reset_postdata(); // restore the global $post to the main query
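For context, $args might look something like this when querying a custom post type (the post type name and values here are just an illustration):

$args = array(
	'post_type'      => 'portfolio', // hypothetical custom post type
	'posts_per_page' => 5,
);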

And it should be noted that all those posts blaming the problem on XDebug are just wrong. The problem is not the limit placed on the number of nested function calls (100 seems far, far more than adequate) but the fact that you’ve reached 100 in the first place.

I sincerely hope this helps others in the same situation.

Although I’ve been using XSLT on a variety of projects for nearly 10 years now, I’m still stuck using XSLT 1.0-only processors and frequently turn to “keys” to solve complex grouping problems. A few weeks ago I was presented with a grouping problem that put my knowledge to the test and forced me to fully understand how keys work and how best to set them up. Thanks to the fabulous people on the XSL mailing list, I was given lots of valuable feedback and pointed in the right direction. I feel it’s only fair to share what I learned: the USE attribute is the XPath to the node whose data you want to use as the key (the “grouper” if you will). Furthermore, the XPath is relative to the ELEMENT in the MATCH attribute.

Normally it’s as easy as specifying an attribute to use as the key but let’s consider an example in which all the elements are the same and the only thing that uniquely identifies them is their location relative to each other. Consider a table with the following structure (TD elements with nothing below them have ROWSPANs set):

td td td td td td
td td td td td td
      td td td td
      td td td td
      td td td td
   td td td td td
      td td td td
      td td td td
   td td td td td
      td td td td
      td td td td
      td td td td
td td td td td td
      td td td td
      td td td td
   td td td td td
      td td td td
      td td td td
      td td td td
      td td td td
   td td td td td
td td td td td td
      td td td td
   td td td td td
      td td td td
      td td td td
   td td td td td
      td td td td
      td td td td
      td td td td
      td td td td
      td td td td

In a KEY element, you identify the source element you wish to capture (TR in this case) in the MATCH attribute. In the USE attribute you specify an XPath statement indicating the node the matched element will belong to (be grouped by). This is fairly straightforward when the input is well defined, but in a situation like ours, where structure is the only thing we can key off of, and the structure itself is somewhat amorphous, it can be quite difficult to write the proper XPath.

Let’s say the first row is the header row. It should not be included in the output as it simply contains labels for the columns. Rather than accounting for it in the key element, we’ll simply skip it when applying templates (select="//tr[position() > 1]").

The key (or group name) for the matched elements will come from rows containing 6 or more TD elements. Specifically, the data will come from the first TD element in those rows. The XPath would be something like this: ancestor-or-self::tr[count(td) >= 6]/td[1]. Unfortunately, this only groups rows in which there are 6 or more TD elements; rows with 5 or fewer TD elements are left out of the result set. For rows with 5 or fewer TD elements we will need to look up in document order and stop at the first row above them containing 6 or more TD elements. This is where it gets complicated… This could be solved with some sort of IF-THEN-ELSE construct, but since we’re using XSLT, that’s not the best approach.

Instead, we’re going to capture ALL the potential keys above the current row and filter out the ones we don’t need.

To “look up” we use preceding-sibling: ancestor-or-self::tr/preceding-sibling::tr[count(td) >= 6]. This gives us ALL the rows with 6 or more TD elements that precede the currently matched row. However, we only want the row [with 6 or more TD elements] that immediately precedes the currently matched row, not all of the preceding siblings. Thus we append [position() = count(.)], which selects the nearest such row, i.e. the last item of the set in document order. (count(.) is always 1, and because preceding-sibling is a reverse axis, position() counts backwards from the current row, so position() = 1 is the closest preceding row; last() doesn’t work here because on a reverse axis it would select the row furthest away.) That is followed by the first TD element in that row: /td[1].

Finally we filter out the nodes we don’t need. We do this by joining the statements via the pipe character | and enclosing the whole thing in parentheses, and from that result set we take the very last element: [last()], which is exactly the key we are looking for. Here is the final key element:

 

<xsl:key 
	name="ports-by-ship" 
	match="tr" 
	use="(ancestor-or-self::tr[count(td) &gt;= 6]/td[1] 
		| ancestor-or-self::tr/preceding-sibling::tr[count(td) &gt;= 6][position() = count(.)]/td[1])[last()]" 
/>
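Once the key is defined, all the rows belonging to a given group can be pulled back with the key() function, something like this (the ship name is made up):

<xsl:apply-templates select="key('ports-by-ship', 'HMS Example')"/>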

 

Because it’s hard to see the result of such a complex XPath, I first run the transformation using a template match on TR elements and copy-of the results to the output. That way I can see what my XPath is actually producing. Once I’ve got the set I’m looking for, I move it into a key element.
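A sketch of that debugging template might look like this (it simply dumps the node each row would be keyed on):

<xsl:template match="tr">
	<xsl:copy-of select="(ancestor-or-self::tr[count(td) &gt;= 6]/td[1]
		| ancestor-or-self::tr/preceding-sibling::tr[count(td) &gt;= 6][position() = count(.)]/td[1])[last()]"/>
</xsl:template>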

They say you don’t really know something until you can explain it to someone else. I’m not sure if I’ve succeeded in explaining it or not, but I feel like I’m much, much closer than I was when I was presented with this problem a few weeks ago.

My XML input.

My XML output.

My XSLT.

Just a quick note for those experiencing the same issue. After a fresh install of an LTSP server from the Ubuntu 10.10 (Maverick Meerkat) alternate CD I was unable to connect from any of the thin clients. I kept getting a TFTP timeout (but DHCP was clearly working).

After checking all the variables mentioned in this article, I discovered that the filename for pxelinux.0 in /etc/ltsp/dhcpd.conf ended in .tmp as in: filename "/ltsp/i386/pxelinux.0.tmp";. I don’t know if this is a bug in the installation program or what, but removing “.tmp” worked like a charm and everything is now up and running, and I’m thrilled!
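In other words, the relevant line in /etc/ltsp/dhcpd.conf went from

filename "/ltsp/i386/pxelinux.0.tmp";

to

filename "/ltsp/i386/pxelinux.0";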

After 7 years of working as a Web Developer remotely from the island of Gran Canaria (and nearly 20 years in some IT related position), I started teaching IT to high school students here in the Canary Islands. Working with teens has been an eye-opener, to say the least…

More than 50% of my students had never used email and had never heard of Netiquette at the start of the school year. Although the curriculum from prior years included the creation of PowerPoint presentations, writing blogs, and modifying HTML, not one student knew how to set a margin or a tab in a word processing application. I was aghast! How could such gaps in basic IT knowledge be tolerated? Where was the curriculum designer? Who gave all these kids email addresses without making them take (and pass) a test on Netiquette first?

To their credit, what they did learn (creating videos, for example) they learned pretty well. Nevertheless, in the business world (and for the foreseeable future) formal business communication (contracts, proposals) takes place in writing, not video, and via email, not via Tuenti. Furthermore, these students, more so than those who came before, absolutely MUST master computer-mediated communication if they ever hope to succeed in their careers.

For these reasons I decided to conduct a series of interviews with some of my former (and present) clients, co-workers, and related software developers. In these interviews we discuss a variety of aspects of working remotely. Most of the people I spoke with agreed on one thing in particular: being able to express yourself clearly, in writing, is the deciding factor in whether or not someone will work with you. One of the interviewees put it this way: “I am going to quickly look for ways to eliminate 95% of [the resumes that cross my desk].” Expressing yourself poorly in writing makes you a likely target for elimination, and this series of interviews is intended to drive that point home.

Now that I’ve edited down the videos and watched them all myself, I’m surprised how consistently the following themes came up:

  • There must be trust between both parties, but it’s not that hard to achieve.
  • Expressing yourself clearly and effectively in writing is crucial to your success.
  • Most problems that arise are the result of a lack of trust.

The café where I recorded most (but not all) of these interviews was my favorite corner café here in Las Palmas: Coffee Break.

The interviews that follow have been edited down to fit within the 15-minute maximum allowed by YouTube.com, but there was a lot of great stuff left on the cutting room floor… Click the name of each person to watch the video and enjoy!

Following my upgrade to Snow Leopard, Apache started producing segfaults for virtually any request. I tried reinstalling the Entropy.ch PHP package (which doesn’t work with Snow Leopard), commenting out LoadModule php5_module in /etc/apache2/httpd.conf, and a host of other things, but the thing that ended up solving the problem for me was:

  1. Be sure to use the PHP that comes with Snow Leopard (leave LoadModule php5_module uncommented – don’t use the entropy.ch package, sorry Marc!)
  2. Comment out LoadModule dav_svn_module /opt/subversion/lib/svn-apache/mod_dav_svn.so (see the snippet below)
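That is, wherever that LoadModule directive lives in your Apache configuration (typically /etc/apache2/httpd.conf or a file it includes), it ends up commented out like this:

#LoadModule dav_svn_module /opt/subversion/lib/svn-apache/mod_dav_svn.so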

The bottom line is, any modules that were not built against the current (Snow Leopard) version of Apache will probably cause some sort of segfault.

Since the version of PHP that comes with Snow Leopard may be missing some of your favorite extensions, here’s a link to some instructions on how to include them (untested by me): Making Snow Leopard’s PHP 5.3.0 usable in the real world

I sure hope this saves someone the 3 hours of pointless poking around that I lost this morning!

A new version of the PHP BBEdit Clipping Set is available for download (for free) immediately:

http://tedmasterweb.com/php-bbedit-clipping-set/

HIGHLIGHT: The new set contains more than 9,200 clippings (that’s about 3,000 more than the previous version).

Changes in this version
==================

– all clippings (optionally) conform as closely as possible to the Zend/PEAR style guides

– removed hundreds of duplicate clippings (mostly constants)

– removed “cruft” (primarily from Snippets and Control Structures)

– reorganized clippings into more logical folder structures

– based this set on a very recent version of the manual

– the set now includes more than 9,200 functions, constants, methods, properties, snippets, control structures and more

– added additional “interactive” functionality to some date functions (I could never remember exactly which switches to use when formatting dates)

– the set now includes class methods and properties

– renamed the clipping set to just “PHP.php” (except for the “Loose” version, see page for details)

+++

I hope you enjoy it, and be sure to let me know if you find any bugs or have any requests for improvement!

Sincerely,

Ted Stresen-Reuter
http://tedmasterweb.com

I’ve heard many stories about people making lots of money via Google’s AdWords and AdSense programs. Most people make this income via AdSense Arbitrage: buying AdWords for less than the income generated by the AdSense ads appearing on your site. You pocket the difference.

I don’t believe 1% of what I hear so I decided to research these claims. While researching I stumbled on Spyfu which offers a list of the most expensive keywords being used in AdWords. The astute reader familiar with Google’s AdWords and AdSense programs will immediately recognize the arbitrage opportunity.

Living in a vacation paradise, having a large selection of potential “stock” photos, knowing something about SEO, and having just received a gift certificate for 50€ in AdWords, I decided to try a little experiment. According to Spyfu.com, “hotels”, “travel”, “flights”, “rental”, “vacations”, “holidays” are among the higher paying keywords that one would associate with my collection of stock photos.

The goal is/was to spend (invest) the 50€ on AdWords at rates lower than what other advertisers are paying to have their ads appear on my site, and then hope that users either buy a license for my photographs or click on an advertisement. This is classic “arbitrage” but avoids breaking Google’s AdSense guidelines because the primary goal is to sell licenses and give users some pretty pictures to look at.

I set up a “stock photography” subsite on tedmasterweb.com with lots of pictures of landscapes here in the islands, in other parts of Europe and one of my dad’s farm outside Chicago. Nothing complicated, but fine tuned for SEO, ease of use, and directed at potential stock photo buyers or anyone who likes to look at pretty pictures of places they are going to visit.

At first the ads appearing on the site were all for financial-related things (like masters degrees and stock trading systems). I changed the subdirectory from “stock” to “stock-photography” and the ads, thankfully, changed to reflect more travel and graphic arts related offers. I also added an AdSense placement on the landing page (rather than just on the enlargement pages). These two changes improved my “conversions”. In just a couple of days I had 3 clicks on ads (whereas prior to the 50€ I went months without a single click!). This great article on another AdSense Arbitrage Experiment corroborates my findings (money can be made, but you’re likely going to lose at first).

In the end, as you might have guessed if you have any experience at all in this field, the experiment proved what any reasonable person would assume: My ROI on this so far is a loss of 48€ (plus the several hours I put in to setting everything up, which I could have billed at 70€/hour for any of my clients).

The next step, of course, will be to improve the quality of the textual content on each page so that it targets holiday travelers more directly and gets them to link to these pictures (and ultimately click on the ads too) and to target my keywords better (so that only very interested people click on my site). I’ll post an update at some point so stay tuned!

But before I go…

I also signed up for Google’s AdSense for domains since I had several domains I’d purchased as part of this same experiment (but a more developed version of it). We’ll see if this provides any additional income. Here are the domains in case you’re curious:

1. Stock holiday pictures (without hyphens)
2. Stock holiday photographs
3. Stock photography royalty free
4. Stock vacation pictures (without hyphens)
5. Stock vacation photographs

We’ll see how all this turns out, but the next time someone tells you they’re bringing in 4,000€/month in AdSense income, tell them you want to see their AdSense account before you’ll believe it.

I recently implemented a newsletter subscription form in ASP.NET (2.0) for the CGIAR Secretariat on behalf of CGNET. This is the second project I’ve done in ASP.NET for CGIAR. I’ve never considered myself a skilled ASP developer and like many, picked up my ASP skills based on code I’d seen on the intertubes and via transfer from related languages. In other words, prior to this project, I was a somewhat capable ASP spaghetti coder.

Tired of producing mediocre code and eager to learn what this whole .Net thing was all about, I decided to invest some time learning how to write better ASP and take advantage of as many features of .Net as I could. Armed with two really good books on the subject (Beginning ASP.NET 3.5 in C# and VB and Programming ASP.NET 3.5, 4th Edition), I learned a lot about the .Net revolution and in the end I significantly improved the quality of my code.

DOT.NET borrows heavily from other MVC-like frameworks. I was surprised by the number of similarities between the ASP.NET-way of doing things and the Fusebox-way of doing things. The rest of this post examines some of these similarities and other aspects of working with ASP.NET. This is mostly an examination of ASP.NET from a (PHP) Fusebox developer’s point of view.

The Project

The CGIAR Secretariat is responsible for www.cgiar.org. The CGIAR Newsroom is one of the primary sections of their web site. It includes an aggregate RSS news feed of all news items coming out of many of the CGIAR centers. Since many people still are unaware of the advantages of RSS, the CGIAR Secretariat asked if we could set up a system that would allow people to subscribe to the feed via email. Specifically, the system we set up allows people to subscribe and/or unsubscribe via cgiar.org, which then automatically sends periodic emails of recently added news items (as they’re added, of course).

The Bottom Line

As an “experienced” web application developer I very much appreciated the ASP.NET-way of doing things. There was nothing in this project that ASP.NET wasn’t able to handle elegantly and more or less efficiently. The project consisted of implementing the following features:

  • A public subscribe and unsubscribe form with CAPTCHA
  • A nightly script that produces a notification email, with alternate views (plain text and html), consisting of news items that had not been sent in prior emails (which implies keeping track of what’s been sent and what hasn’t)
  • A password protected administration interface

The entire project was completed in about 80 hours (including an initial version of the administration interface which was later tabled).

If given the choice of doing the same project in PHP Fusebox, assuming I had the same knowledge and experience with PHP that I had with ASP when I started, would I have chosen PHP Fusebox over ASP.NET? Maybe…

Master Pages

One of the goals of any application framework is to maximize code reuse (and conversely, minimize code duplication). Functions (methods) are one example of how this is accomplished, but when it comes to the presentation layer, developers often find they need a more powerful programming model. Both ASP.NET and Fusebox (and many other web application frameworks) provide tools that address this need. In ASP.NET, a Master Page is a template for all the pages of a site, although its functionality goes beyond that of a simple templating engine. Master Pages also let you define “behaviors” common to all pages, somewhat similar to the Fusebox 3 fbx_Settings file or the Fusebox 4 fusebox.init file.
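As a rough sketch (the file names, IDs, and markup are invented for illustration, not the actual CGIAR code), a Master Page and a content page fit together like this:

<%-- Site.master --%>
<%@ Master Language="C#" %>
<html>
<body>
	<form id="form1" runat="server">
		<div id="header">Site-wide header and navigation</div>
		<asp:ContentPlaceHolder ID="MainContent" runat="server" />
	</form>
</body>
</html>

<%-- Subscribe.aspx --%>
<%@ Page Language="C#" MasterPageFile="~/Site.master" %>
<asp:Content ContentPlaceHolderID="MainContent" runat="server">
	<p>Page-specific markup goes here.</p>
</asp:Content>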

A powerful templating engine will frequently go beyond “one layer” and allow the developer to subdivide sections of the Master to be handled by other parts of the application. ASP.NET offers this functionality directly and at least one project I’ve worked on in Fusebox had the same functionality. I found a few opportunities to use this feature on this project.

Fusebox does not offer a templating engine out of the box, but you can easily create much of this capability in Fusebox 4 (and to some degree in Fusebox 3) using Content Variables and a “layout” circuit. Most projects I work on offer some such circuit.

In the end I got a lot of mileage out of Master Pages for this project and hope to be able to use them in future projects for CGIAR.

Postback

By default, every ASP.NET page contains a form that executes on the server. The value of the “action” attribute for the form is the current file. Microsoft has termed this approach “postback” because you post the form back to the same document that created it. In some respects this is similar to the Fusebox implementation of the Front Controller design pattern, where every request is for the same server-side script (e.g.: /index.php) followed by a Query String containing directions on what files, functions, and procedures to execute.

ASP.NET offers the developer server-side elements known as panels. Panels are “controls” (elements?) that contain other controls. By setting a panel’s Visible property you can control whether or not its contents appear on the page. For the CGIAR project mentioned above, I used this technique to display either the subscription form or the “thank you” message following a successful subscription. I suppose that, if each ASP.NET page is the equivalent of a Fusebox circuit, each panel could be the equivalent of a fuseaction. You would simply set the visibility of all of them to false (except the default fuseaction 😉 ) and then display them as needed. Coming from Fusebox, I found the concept very easy to grasp.
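A stripped-down version of that pattern might look something like this (the control names are hypothetical, not the actual CGIAR markup):

<asp:Panel ID="pnlSubscribe" runat="server">
	<asp:TextBox ID="txtEmail" runat="server" />
	<asp:Button ID="btnSubscribe" runat="server" Text="Subscribe" OnClick="btnSubscribe_Click" />
</asp:Panel>
<asp:Panel ID="pnlThanks" runat="server" Visible="false">
	<p>Thank you for subscribing!</p>
</asp:Panel>

And in the C# code-behind:

protected void btnSubscribe_Click(object sender, EventArgs e)
{
	// ... save the subscription here ...
	pnlSubscribe.Visible = false; // hide the form
	pnlThanks.Visible = true;     // show the confirmation
}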

Code-behind

Microsoft made an attempt to separate application logic from presentation with DOT.NET. In my opinion, they succeeded.

In order to minimize the amount of raw code found in HTML, .Net provides something known as “code-behind” pages, which are essentially includes with the same name as the file they are attached to. The idea is that your application code goes in the code-behind page and if you need to modify the presentation (the HTML) from within the application code, you do so by referencing elements in your HTML page via their ID attribute (this is an oversimplification but summarizes the approach).

Fusebox, on the other hand, tries to separate code by prefixing file names with one of dsp_, qry_, act_ (and sometimes lyt_).

  • qry_ files contain datasource queries and (usually) return some sort of object or array echoed to the browser by a dsp_ file.
  • act_ files are for those instances in which you need to process data prior to executing a query or echoing to the browser.
  • lyt_ files are, in essence, the same as Master Pages in ASP.NET.

In ASP.NET the “HTML” files contain a LOT of .Net namespaced elements. This means the files can be completely valid XHTML (with the single, notable exception of the Processing Instructions found at the top of each page). The benefit is that these files are suddenly very portable and can be consumed by any system capable of reading XML.

If you wanted to reproduce this ASP.NET functionality in Fusebox, you would need to write a plug-in that parses the dsp_ files looking for <fbx: elements and responds accordingly. You could put all of your code in act_ files which would essentially turn them into code-behind files. Now there’s a potential open source time sucker!

Data Controls

As one would expect, the data controls are very complete, but figuring out how to do something like nesting GridViews was not obvious, and were it not for a Nested GridView walk-through article on MSDN, I never would have been able to figure out how to do it. Furthermore, I’m not convinced executing SQL on EVERY ROW of a record set is really a good idea (the authors of the walk-through admit this is not the best approach, but only as it concerns caching…).

Much of the CGIAR Newsroom revolves around their RSS feed (which is a compilation of feeds from all of the CGIAR Centers). ASP.NET 2.0 and above include controls for using XML as a data source and thus facilitating the display of XML data in a web page.
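For example, listing the item titles from an RSS file can be sketched in a few lines of markup (the file name and control IDs are invented):

<asp:XmlDataSource ID="xdsNews" runat="server"
	DataFile="~/App_Data/newsfeed.xml" XPath="/rss/channel/item" />
<asp:Repeater ID="rptNews" runat="server" DataSourceID="xdsNews">
	<ItemTemplate>
		<li><%# XPath("title") %></li>
	</ItemTemplate>
</asp:Repeater>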

Unfortunately, in version 2.0 of .Net (and possibly higher), using XML as a data source only allows you to display the data. It does not allow you to use the built-in INSERT, UPDATE, and DELETE features of the GridView control. There are work-arounds for implementing this missing functionality but I have to wonder if you save any time hacking in the functionality vs. building the entire administration interface the “old” way (which can be done pretty quickly using XSLT). By my estimates, it’s a draw, at best.

ASP.NET Advantages

For my needs, web controls, validation in particular, significantly reduce web application development time. I simply cannot express how much I like the validation controls and WISH PHP had something similar!
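For example (control names hypothetical), making an email field required and loosely checking its format takes just a few lines of markup:

<asp:TextBox ID="txtEmail" runat="server" />
<asp:RequiredFieldValidator runat="server" ControlToValidate="txtEmail"
	ErrorMessage="Please enter your email address." />
<asp:RegularExpressionValidator runat="server" ControlToValidate="txtEmail"
	ValidationExpression="\S+@\S+\.\S+"
	ErrorMessage="That doesn't look like a valid email address." />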

IntelliSense greatly speeds up coding. Microsoft offers several free (web) application development tools that work quite well, are more than adequate for the projects I usually work on, and many of them include IntelliSense.

I don’t think I could have completed this project as quickly without IntelliSense.

ASP.NET Ambiguities and Disadvantages

Since no language is free of sin, here is my list of things that got the best of me while working on this project:

  • web.config cannot be part of the code repository since it is machine-dependent. If application configuration options are so different between deployment environments, then maybe the author of the application should consider using a different development environment. I would prefer to have the application configuration code right in the application, tucked into code that “sniffs” the environment and configures accordingly. This makes for MUCH more portable code.
  • .Net developers frequently publish their source code, but it usually needs to be compiled, so unless you’re into that or have the time to learn how to do it (and do it right), you won’t find the same kind of huge Open Source community of code that exists for PHP.
  • ASP.NET 2.0 includes some Authentication and Authorization controls, but like most stuff like this, you have to do things the ASP.NET-way or you can’t use these controls. In other words, there is no (apparent) way to retrofit these controls onto existing authentication mechanisms. In the end this is probably a good thing since most existing authentication methods are not very secure, but in the real world most clients simply are not willing to put money into changing existing systems unless you can clearly demonstrate they are broken.
  • One of the main complaints of Fusebox 4 was the XML files. Rather than being used simply for configuration, you could easily add business logic to them. More than one programmer has asked herself: “Why bother with XML to represent classes? Why not just use classes directly?” I must say, when programming in ASP.NET, I often feel like I’m simply setting application configuration parameters which, for anything but the most basic interactions, makes programming harder (and possibly more time-consuming) rather than easier (and faster) since you have to have a clear understanding of what state the application is in at the exact point where your code appears. This can be harder than it seems.

In the end, if I had to do it all over again (and had the choice), I would probably stick with PHP Fusebox, but I’m grateful I had the opportunity to improve my knowledge of ASP.NET and I wish the CGIAR Secretariat the best with their new system.

Additional Reading and Links

Convert ASP.NET applications into real MVC frameworks

Fusebox Basics

Comparison of Web Application Frameworks