Old Dogs

I wrote my first line of computer code in 1977, 43 years ago as of today. Back then, when I was 13, a friend and I would go to our local Radio Shack store after school or on the weekend and fool around with the TRS-80 they had on display. We were end users, loading games from the cassette “drive” connected to the machine, but we both had the desire to make the computer do something more. We wanted to make it draw pictures!

A few weeks later, my friend’s father bought a TRS-80 for his son. Victor and I spent the day learning to use it. We immediately sat down and started writing code (WET code, riddled with bugs) and printing it all out on a dot-matrix printer to make sure we had typed it in properly. I remember I went to his house shortly after lunch. By dinner time, after not having left his room for hours, our progress was, well, about what you would expect from two kids who had never done a programming tutorial in their lives (because they didn’t exist) and who were not terribly fond of RTFM.

After all those hours, the only thing we’d managed to make the computer do was draw an outline of a house on the screen and save the instructions on the cassette. IIRC, we were so uninformed about programming that our program consisted of something like: GO TO 1,1; PRINT 1; GO TO 1,2; PRINT 1; GO TO 1,3; PRINT 1, etc. until we’d managed to turn on every pixel in the form of an outline of a house. Needless to say, fourteen years passed before I attempted programming again.

New Tricks

In 1991 I bought a Mac LC with 4MB of RAM and a 20MB hard drive (and a color monitor, which was a novelty at the time). It came with HyperCard (“magic“) and my employer at the time gave me a licensed copy of Macromedia Director 3.1! Although Director could barely run on my low-powered Motorola 68020 processor, my world was changed forever, and for the better.

Over the course of 4 years I produced a bookkeeping system to balance my checkbook in HyperCard, two interactive multimedia games for learning Spanish in Director, and even ventured down the path of trying to write a document versioning system in HyperCard (so students could see how their documents were changed after a teacher’s review – I gave up pretty quickly on that one). And then in 1995 I received a copy of Claris Home Page, and again, my life would never be the same ever after.

Another Old Dog

In 1996 I was asked to build more than a few web sites. I soon realized that Claris Home Page, although it was actually a reasonably good WYSIWYG editor, was not the right tool for interactive sites or sites consisting of more than a few pages. I started hearing about Perl and then soon after, PHP, and this really frightening thing called Linux (and open source – shareware had existed for a while, but licensed open source was still relatively new to me).

In 1998 I created my first interactive web site that included a proper (open source) search engine. I was so fascinated by it that I started to participate on the mailing list, trying to help others understand and make use of it. I even wrote a bunch of classes in PHP that abstracted the ht://Dig interface and published them on phpclasses.org. ht://Dig really was my first experience working on an open source project, but all I did was participate on the mailing list, test beta versions, and create a Mac OS X package to facilitate installation. I didn’t submit patches (the precursor to today’s pull requests) and certainly didn’t have any kind of write access to the repo (which was probably in CVS, IIRC).

Fast Forward 20 Years

Since those early years I’ve “contributed” to a variety of open and closed source projects including Sequel Pro (financial support and translations), Dreamweaver (I was a beta tester in the late 1990s), BBEdit (PHP Clippings, I’m even in the credits on that one), the PHP documentation, PHP Fusebox (a PHP port of the ColdFusion Fusebox web application framework), and more, a lot more, more than I can actually remember TBH.

But over all these years I’ve never, ever submitted a pull request, until now!

I’ve been an avid user of wp-cli for years. I’ve been active on and off on the Slack channel and tried to report bugs and suggest changes but never went all the way. A year ago, when I was first experimenting with the wp scaffold plugin-tests command, I discovered an irregularity that I found frustrating, and finally reported it. Months went by with apparently no action on the bug (but the maintainers didn’t close the bug, to their credit). A few weeks ago another user chimed in with signs of similar frustration and I decided it was time to do something about it.

I read some instructions on how to submit a pull request and lo and behold, I got some traction. James Nylen reviewed my code and left some suggestions on how it could be improved. I followed his lead and submitted my changes. Then the principal project maintainer, Alain Schlesser, chimed in with more changes, which I addressed as best I could. This back and forth went on for a few weeks and finally, my pull request was accepted and merged into master. My first PR at 55 years young!

Lessons Learned

  1. It really is never too late.
  2. Every action paves the way to the future.
  3. Go easy on maintainers. It’s often a thankless job, so be sure to thank them along the way.

I’m feeling empowered now and you know what they say: a little knowledge is a dangerous thing… If you’re an open source maintainer and you’re reading this now, be prepared. I’m coming your way!

WordPress Integration Testing Overview and Issues

Almost a year ago I started experimenting with the “official” WordPress plugin testing tools while working on our Ratify plugin. I’m not a glutton for punishment but I do like a good challenge and, for the most part, I wasn’t let down.

The first half of this year we had another ripe opportunity to work with the testing framework. We were building a custom publishing solution based on WordPress. As a custom solution, it required a few plugins that would allow the authors and editors to (more) easily manage their content, in addition to tracking visitors’ every breath while on the site. With a team of almost 10 developers at one point, it was crucial that we all knew how to write and run tests.

A big part of my job is developer training. I decided to create a practice plugin that we could use to teach developers how to use the framework and write good tests. The plugin would allow you to assign a “countries” taxonomy to any standard or custom post type. We wrote a general description of the plugin and then a series of tests we thought would help us build it TDD-style. The view is always better from on high.

Coding Environment and Tools

We use vscode with a number of extensions (more, actually, but only a few are pertinent to this project).

Although I am a life-long BBEdit user (and occasional contributor), there are certain features in vscode that make doing this kind of development a joy (in-editor PHPCS feedback, code completion via IntelliSense, and multicursor editing in particular). Do note: I’m writing this post using MarsEdit, but only because it’s already configured to post to my blog directly.

WP-CLI and Laravel Homestead

We use Laravel Homestead for nearly everything we do here at Secret Source. Yes, we know, Docker is the future, but we like Homestead. Homestead may require a little tweaking to use with WordPress but normally it just works.

We use wp-cli to generate the testing framework / harness (wp scaffold plugin-tests). It works well enough but this is where we start to struggle and I’m going to go into detail here because Josh Pollock was wondering what we were struggling with.

Issues with the Testing Framework

In general, the testing framework works, and I can’t imagine how much effort I would have to put in to recreate it myself, so kudos to the WordPress team for producing it. That said, as systems get more complex it becomes even more important to have a testing framework in your tool belt, so, IMHO, more effort should be put into making the framework as easy to use as possible.

Installation and Setup Seem Difficult

The wp-cli scaffolding consists of a few files you’ll need to get the full benefit of the testing framework. It does not include the actual tools or binaries you’ll need. For example, in order to use the framework you need phpunit, and not just any version: in our experience, anything higher than 6.5 won’t work. Subversion is also required, so, sudo apt install subversion. I have to believe that these two steps could be encapsulated in a composer.json file, but they aren’t, so two additional manual steps are required to get your environment configured.
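For what it’s worth, the two manual steps boil down to something like the following (package and version names are from our Homestead setup and may differ on yours). The little version guard at the end is only a sketch of the kind of pre-flight check I wish the scaffolding did before running the suite:

```shell
# Manual dependencies (hedged: these commands reflect our Ubuntu/Homestead setup):
#   sudo apt-get install -y subversion
#   composer require --dev phpunit/phpunit:^6.5

# Sketch of a pre-flight guard: refuse PHPUnit 7+, which did not work
# with the framework in our experience.
phpunit_ok() {
  major=${1%%.*}          # take everything before the first dot
  if [ "$major" -le 6 ]; then
    echo "ok"
  else
    echo "too new: $1"
  fi
}

phpunit_ok "6.5.14"   # prints: ok
phpunit_ok "7.2.0"    # prints: too new: 7.2.0
```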

Issues Initializing the Database

When you install the test scaffolding, it includes a bash script (bin/install-wp-tests.sh) that tries to set up the whole environment, including the database that is used when running the tests. The script, however, could be improved. On more than one occasion I’ve found myself having to manually fiddle with MySQL (deleting tables) and delete all references to WordPress in /tmp/wordpress in order to get the script to run properly. Also, at the end of the script MySQL prints a scary warning about including the password on the command line. It can be ignored, but there are ways to avoid this kind of misdirection.
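On the password warning: one conventional way to silence it is to pass the credentials through a client option file instead of on the command line. A sketch (the file name and credentials here are made up for illustration):

```shell
# Write a throwaway MySQL client option file (illustrative values only).
cnf="$HOME/.wp-tests.cnf"
cat > "$cnf" <<'EOF'
[client]
user=wp_test
password=secret
EOF
chmod 600 "$cnf"

# The install script could then connect without -p on the command line,
# e.g.: mysql --defaults-extra-file="$cnf" -e 'CREATE DATABASE IF NOT EXISTS wordpress_test;'
grep -c '^password=' "$cnf"   # prints: 1
```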

I would start by isolating dependencies as much as possible and including a little more sanity checking and such. Since this is written in bash, I could do it myself, but I have a feeling it should probably be some sort of PHP script, maybe even installable via composer?
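By “sanity checking” I mean something as simple as failing fast with a readable message when a dependency is missing, rather than dying halfway through the script. A sketch of the kind of check I would add near the top of bin/install-wp-tests.sh:

```shell
# Fail fast if a required command is absent (a sketch, not the real script).
require() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "found: $1"
  else
    echo "missing: $1 -- please install it and re-run" >&2
    return 1
  fi
}

require sh            # prints: found: sh
# require svn mysql   # the real script would check svn, mysql, phpunit, etc.
```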

There is no Way to Test Plugin Installation

As far as I can tell… the framework doesn’t allow you to test the installation hooks (installation seems to be bypassed during bootstrapping). As (bad) luck would have it, our practice plugin has a very specific requirement that it not proceed under certain circumstances, but we’re unable to automate this test.

Issues with (Vscode and) PHPCS

I’m not a big fan of vscode, but it does have some very compelling features, including a PHP CodeSniffer extension that helps you write properly formatted and, in theory, less complex code. This is a feature I really want, as I need all the help I can get!

There are really two issues:

  1. The WordPress plugin test scaffolding comes with a different set of code sniffs (a “ruleset”, in phpcs terms, I believe) than WordPress core, which is reasonable, but if you don’t know how phpcs works in general, this is going to take a fair amount of investigation to figure out, as it did in our case.
  2. The vscode phpcs extension has a per-project installation option (if you don’t mind installing phpcs via npm), but getting vscode to find the binary can be tricky, and configuring phpcs to look at the right sniffs can be even harder. We managed to get it set up and working, mostly, but if you’re looking to learn how, I suggest you look at the wprig.io github repo for an example of how to do it.
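For reference, pointing phpcs at the right sniffs usually comes down to a ruleset file in the project root. This is only a sketch: the file name and the WordPress-Extra ruleset (from the WordPress Coding Standards package) are assumptions about what you have installed, not something the scaffolding gives you.

```xml
<?xml version="1.0"?>
<!-- phpcs.xml.dist (sketch): makes a bare `phpcs` run pick up WordPress sniffs -->
<ruleset name="plugin-tests-example">
	<rule ref="WordPress-Extra"/>
	<file>.</file>
	<exclude-pattern>vendor/*</exclude-pattern>
	<exclude-pattern>node_modules/*</exclude-pattern>
</ruleset>
```

With a file like this in place, both the command line and the vscode extension should agree on which sniffs apply to the project.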

Difficulty Defining Tests

Writing good tests for TDD is an art, plain and simple. If you ever find someone who is genuinely good at it, latch on to her and don’t let go. Learn as much as you can from her, and if you’re her supervisor, give her the freedom (and time) she needs to practice her art.

The very first test we wrote for our practice project turned out to be untestable using the WordPress testing framework. For the curious, we wanted the plugin to fail installation gracefully if a taxonomy named “countries” already existed. Due to how the test harness is invoked, the plugin skips the normal installation process, which is when we were planning on testing for the existence of said taxonomy. I won’t detail the time it took us to figure this out, but I will say it was not insignificant, and there was little documentation on the subject. We just read the bootstrapping code.
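The reason, as far as we could tell from reading the bootstrapping code: the scaffolded bootstrap loads the plugin file directly on muplugins_loaded, so the normal activation path never runs. From memory, the relevant part of the generated tests/bootstrap.php looks roughly like this (simplified; the plugin file name is a placeholder):

```php
// Sketch of the scaffolded tests/bootstrap.php (from memory, simplified).
// Because the plugin file is require'd directly, register_activation_hook
// callbacks never fire inside the test environment.
function _manually_load_plugin() {
	require dirname( dirname( __FILE__ ) ) . '/my-plugin.php'; // placeholder name
}
tests_add_filter( 'muplugins_loaded', '_manually_load_plugin' );
```

So anything your plugin does only during activation is simply invisible to the test suite unless you trigger it yourself.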

The second test makes some assumptions about what data will be available in the environment. This is fine, but it means that we have to do a fair amount of mocking, or even creating of data, prior to running the test. This is not a huge problem so long as the tests are relatively short. It seems to me that there ought to be a more direct and faster way of testing this aspect, but I haven’t found one yet.

The Future

Thanks to other motivated WordPressers, the future still looks kind of bright for WordPress and testing. My plan is to get these kinks resolved in the next couple of weeks. I will be publishing our sample plugin eventually, so follow me on Twitter if you’re interested in seeing the final result.

After searching a rather long time and not finding anything, I decided to publish my own list of URLs as a plain text file, one URL per line.

As I do development, I sometimes need plain text files of test data. I can find Lipsum galore, placeholder photos of Bill Murray, lists of words, plain text files of varying sizes, even a place to dispose of your data (yes, trash can as a service), but I’ll be damned if I can find a single text file full of URLs. So I made my own plain text file of URLs. I hope you find this useful 🙂

You know the problem… “We’d like users to be able to authenticate using Facebook so they don’t have to create an account on our system.”

Here at Secret Source Ltd we do lots and lots of WordPress development and lately, nearly every project has required some sort of Facebook integration, the ability to log in with your Facebook account in particular. Fortunately there are multiple plugins to help you with that. We’re quite fond of the Nextend Facebook Connect plugin as it is very easy to configure and includes clear instructions on how to create a Facebook app and get everything working like a charm.

For me, though, there has always been one particular point of pain that none of these plugins has figured out how to work around: unless you want to be responsible for a Facebook app for your client’s logins, your client needs to give you THEIR Facebook credentials so you can log in AS THEM to create and configure the Facebook login app.

However, I recently figured out a way around this. I know I may be late to the party but I’ve been unable to find this approach documented anywhere, or maybe I just didn’t search hard enough.

The approach is as follows:

  1. Log in to the Facebook Developer portal as yourself.
  2. Create a new app and configure it for your client, using their logo, domain, and everything.
  3. Go to Roles -> Administrators -> Add Administrators and add your client as an administrator. Note that you and your client do not need to be friends in order for you to add them, but you will need to know either their Facebook ID or Facebook “username”.

Once your client accepts your invitation to be an administrator, they can then go to the Roles tab and remove YOU from the list of administrators. The app is now THEIRS and your job is done! Note that if needed, the client can always add you back as an administrator.

This, to me, seems like a very reasonable way to set this up, and I can, for once, say I am quite happy with Facebook’s developer tools.

System Configuration

I have a CentOS 7 host machine with two KVM guests:

  1. Smoothwall 3.1
  2. Ubuntu 14.04

The Smoothwall is configured RED+GREEN+ORANGE with three separate network interfaces. The RED and GREEN interfaces are connected via bridged Macvtap with virtio drivers to two separate host interfaces (enp5s0 and enp6s0) which, in turn, are connected to two physical network cards on the machine. ORANGE is connected to a guest-only virtual switch. RED is connected to another network / gateway that connects the Smoothwall to the Internet, same as the host interface on the same card (but with different IP addresses – the RED is static, entered manually and the host is dynamic – DHCP).

After a restart of the host, all machines, host and guests, have all of the expected network connectivity, and everything just works.

The Problem

Where I live we experience periodic Internet outages lasting from a second or two to several minutes. Every time there is an outage the RED interface loses all connectivity, both inbound and outbound, once connectivity to the host has been restored (and possibly during the outage).

Following the loss of connectivity, in the host’s Settings -> Network control panel the interface associated with RED (Macvtap0), which is normally visible and editable, is still visible, but its options cannot be changed: both the Options button and the ON/OFF switch are missing. Furthermore, it no longer shows anything more than a MAC address (normally it shows both a MAC and an IPv6 address).

Control Panel Before Outage

Macvtap working properly

Control Panel After Outage

Macvtap failing

I’ve tried changing the driver (virtio, rtl8139, e1000, etc.) and I’ve tried changing the physical hardware but neither change has altered the behavior so I’m pretty sure it’s something in the Macvtap software that’s failing (or misconfigured).

The Solution?

There are multiple articles and bug reports that seem to indicate that this can be repaired by enabling promiscuous mode on the interface, but I tried that (sudo ifconfig Macvtap0 promisc) to no avail. Maybe I needed to toggle the interface (ifup or something like that?)

I’m new to WordPress development and recently found myself needing to create a secondary query using WP_Query. The Codex has a pretty good introduction to querying WordPress using WP_Query, but nowhere, not even in the new Developer documentation, do they say that you MUST call the_post() inside the loop. If you fail to call the_post(), you may get the error “Fatal error: Maximum function nesting level of ‘100’ reached, aborting!”

To the documentation team’s credit, they do point out that if you fail to call the_post(), many template tags will fail to function as expected. Also, in a few places they do mention that the_post() increments the internal counter by one. It makes me wonder why have_posts() doesn’t just call the_post() internally and be done with it…

I lost hours trying to figure this out. And just for the sake of completeness, here is a full example of what WordPress needs to properly execute a loop as a secondary query (and it matters not that this is against a custom post type):

$my_query = new WP_Query( $args );

while ( $my_query->have_posts() ) {
	$my_query->the_post(); // nothing visual: sets the global $post and advances the internal pointer
	the_title(); // a global template tag; it reads the global $post, which is why there is no $my_query->the_title()
}

wp_reset_postdata(); // restore the global $post to the main query

And it should be noted that all those posts claiming the problem is XDebug are just wrong. The problem is not the limit placed on the number of nested function calls (100 seems far, far more than adequate) but the fact that you’ve reached 100 in the first place.

I sincerely hope this helps others in the same situation.

Although I’ve been using XSLT on a variety of projects for nearly 10 years now, I’m still stuck using XSLT version 1.0-only processors and frequently turn to “keys” to solve complex grouping problems. A few weeks ago I was presented with a grouping problem that put my knowledge to the test and forced me to fully understand how keys work and how best to set them up. Thanks to the fabulous people on the XSL mailing list, I was given lots of valuable feedback and pointed in the right direction. I feel it’s only fair to share what I learned: the USE attribute is the XPath to the node whose data you want to use as the key (the “grouper” if you will). Furthermore, the XPath is relative to the ELEMENT in the MATCH attribute.

Normally it’s as easy as specifying an attribute to use as the key but let’s consider an example in which all the elements are the same and the only thing that uniquely identifies them is their location relative to each other. Consider a table with the following structure (TD elements with nothing below them have ROWSPANs set):

td td td td td td
td td td td td td
      td td td td
      td td td td
      td td td td
   td td td td td
      td td td td
      td td td td
   td td td td td
      td td td td
      td td td td
      td td td td
td td td td td td
      td td td td
      td td td td
   td td td td td
      td td td td
      td td td td
      td td td td
      td td td td
   td td td td td
td td td td td td
      td td td td
   td td td td td
      td td td td
      td td td td
   td td td td td
      td td td td
      td td td td
      td td td td
      td td td td
      td td td td

In a KEY element, you identify the source element you wish to capture (TR in this case) in the MATCH attribute. In the USE attribute you specify an XPath statement indicating the node the matched element will belong to (be grouped by). This is fairly straightforward when the input is well defined, but in a situation like ours, where structure is the only thing we can key off of, and the structure itself is somewhat amorphous, it can be quite difficult to write the proper XPath.

Let’s say the first row is the header row. It should not be included in the output as it simply contains labels for the columns. Rather than accounting for it in the key element, we’ll simply skip it when applying templates (select="//tr[position() > 1]").

The key (or group name) for the matched elements will come from rows containing 6 or more TD elements. Specifically, the data will come from the first TD element in those rows. The XPath would be something like this: ancestor-or-self::tr[count(td) >= 6]/td[1]. Unfortunately, this only groups rows in which there are 6 or more TD elements. Rows with 5 or fewer TD elements are left out of the result set. For rows with 5 or fewer TD elements we will need to look up in document order and stop at the first row above them containing 6 or more TD elements. This is where it gets complicated… This could be solved with some sort of IF-THEN-ELSE construct, but since we’re using XSLT, that’s not the best approach.

Instead, we’re going to capture ALL the potential keys above the current row and filter out the ones we don’t need.

To “look up” we use preceding-sibling: ancestor-or-self::tr/preceding-sibling::tr[count(td) >= 6]. This will give us ALL the rows with 6 TD elements preceding the currently matched row. However, we only want the row [with 6 TD elements] that immediately precedes the currently matched row, and not all of the preceding siblings. Thus we append: [position() = count(.)]. Since count(.) is always 1, this is equivalent to [1], and because preceding-sibling is a reverse axis, position 1 is the nearest preceding row, which is the last one in document order (this is also why last() doesn’t work here: on a reverse axis it would select the furthest preceding row instead). Finally we take the first TD element in that row: /td[1].

Finally we filter out the nodes we don’t need. We do this by joining the statements via the pipe character | and enclosing the whole thing in parentheses, and from that result set we take the very last element: [last()], which is exactly the key we are looking for. Here is the final key element:


<xsl:key 
	name="ports-by-ship" 
	match="tr" 
	use="(ancestor-or-self::tr[count(td) &gt;= 6]/td[1] 
		| ancestor-or-self::tr/preceding-sibling::tr[count(td) &gt;= 6][position() = count(.)]/td[1])[last()]" 
/>


Because it’s hard to see the result of such a complex XPath, I first run the transformation using a template match on TR elements and copy-of the results to the output. That way I can see what my XPath is actually producing. Once I’ve got the set I’m looking for, I move it into a key element.
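For completeness, consuming the key then looks something like this. This is a hedged sketch: the output element names are invented, and it assumes you apply templates to the “ship” rows (those with 6 or more TD elements) and pull in their grouped rows via key():

```xml
<!-- Sketch: for each "ship" row (6+ TDs), emit its group of rows.
     Output element names are invented for illustration. -->
<xsl:template match="tr[count(td) &gt;= 6]">
	<ship name="{td[1]}">
		<xsl:copy-of select="key('ports-by-ship', td[1])"/>
	</ship>
</xsl:template>
```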

They say you don’t really know something until you can explain it to someone else. I’m not sure if I’ve succeeded in explaining it or not, but I feel like I’m much, much closer than I was when I was presented with this problem a few weeks ago.

My XML input.

My XML output.

My XSLT.

Just a quick note for those experiencing the same issue. After a fresh install of an LTSP server from the Ubuntu 10.10 (Maverick Meerkat) alternate CD I was unable to connect from any of the thin clients. I kept getting a TFTP timeout (but DHCP was clearly working).

After checking all the variables mentioned in this article, I discovered that the filename for pxelinux.0 in /etc/ltsp/dhcpd.conf ended in .tmp as in: filename "/ltsp/i386/pxelinux.0.tmp";. I don’t know if this is a bug in the installation program or what, but removing “.tmp” worked like a charm and everything is now up and running, and I’m thrilled!
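For anyone scripting the fix, it amounts to stripping the stray .tmp suffix from the filename line. A sketch that rehearses the edit against a scratch copy first (always test before touching the real /etc/ltsp/dhcpd.conf):

```shell
# Rehearse the fix on a scratch copy of dhcpd.conf.
conf=/tmp/dhcpd.conf.test
printf 'filename "/ltsp/i386/pxelinux.0.tmp";\n' > "$conf"

# Strip the stray .tmp suffix from the pxelinux filename.
sed -i 's#pxelinux\.0\.tmp#pxelinux.0#' "$conf"

cat "$conf"   # prints: filename "/ltsp/i386/pxelinux.0";
```

On the real system you would run the sed against /etc/ltsp/dhcpd.conf and then restart the DHCP service.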

After 7 years of working as a Web Developer remotely from the island of Gran Canaria (and nearly 20 years in some IT related position), I started teaching IT to high school students here in the Canary Islands. Working with teens has been an eye-opener, to say the least…

More than 50% of my students had never used email and had never heard of Netiquette at the start of the school year. Although the curriculum from prior years included the creation of PowerPoint presentations, writing blogs, and modifying HTML, not one student knew how to set a margin or a tab in a word processing application. I was aghast! How could such gaps in basic IT knowledge be tolerated? Where was the curriculum designer? Who gave all these kids email addresses without making them take (and pass) a test on Netiquette first?

To their credit, what they did learn (creating videos, for example) they learned pretty well. Nevertheless, in the business world (and for the foreseeable future) formal business communication (contracts, proposals) takes place in writing, not video, and via email, not via Tuenti. Furthermore, these students, more so than those who came before them, absolutely MUST master computer-mediated communication if they ever hope to succeed in their careers.

For these reasons I decided to conduct a series of interviews with some of my former (and present) clients, co-workers, and related software developers. In these interviews we discuss a variety of aspects of working remotely. Most of the people I spoke with agreed on one thing in particular: being able to express yourself clearly, in writing, is the deciding factor in whether or not someone will work with you. One of the interviewees put it this way: “I am going to quickly look for ways to eliminate 95% of [the resumes that cross my desk].” Expressing yourself poorly in writing makes you a likely target for elimination, and this series of interviews is intended to drive that point home.

Now that I’ve edited down the videos and watched them all myself, I’m surprised how consistently the following themes came up:

  • There must be trust between both parties, but it’s not that hard to achieve.
  • Expressing yourself clearly and effectively in writing is crucial to your success.
  • Most problems that arise are the result of a lack of trust.

The café where I recorded most (but not all) of these interviews was my favorite corner café here in Las Palmas: Coffee Break.

The interviews that follow have been edited down to fit within the 15 minute maximum allowed by YouTube.com, but there was a lot of great stuff left on the cutting room floor… Click the names of each person to watch the video and enjoy!

Following my upgrade to Snow Leopard, Apache started producing segfaults for virtually any request. I tried reinstalling the entropy.ch PHP package (which doesn’t work with Snow Leopard), commenting out LoadModule php5_module in /etc/apache2/httpd.conf, and a host of other things, but what ended up solving the problem for me was:

  1. Be sure to use the PHP that comes with Snow Leopard (leave LoadModule php5_module uncommented – don’t use the entropy.ch package, sorry Marc!)
  2. Comment out LoadModule dav_svn_module /opt/subversion/lib/svn-apache/mod_dav_svn.so

The bottom line is, any modules that were not built against the current (Snow Leopard) version of Apache will probably cause some sort of segfault.

Since the version of PHP that comes with Snow Leopard may be missing some of your favorite extensions, here’s a link to some instructions on how to include them (untested by me): Making Snow Leopard’s PHP 5.3.0 usable in the real world

I sure hope this saves someone the 3 hours of pointless poking around that I lost this morning!