Peterbe.com

A blog and website by Peter Bengtsson

Filtered home page: currently only showing blog entries under the category Work.

Being a recruiter is hard work. There's a lot of pressure, and you have to deal with people's egos. Although I have no plans to leave Mozilla any time soon, there's still some value in seeing that my skills are sought after in the industry. That's why I haven't yet completely cancelled my LinkedIn membership.

When I get automated emails from bots that just scrape LinkedIn, I don't bother replying. Sometimes I get emails from recruiters who have actually studied my profile (my blog, my projects, my GitHub, etc.) and then I do take the time to reply and say "Hi Name! Thank you for reaching out. It looks really exciting but it's not for me at the moment. Keep up the good work!"

Then there's this new trend where people appear to try to automate what the bots do by doing it manually, but without actually reading anything. I understand that recruiters are under a lot of pressure to deliver and try to reach out to as many potential candidates as possible, but my advice is: if you're going to do it, do it properly. You'll reach fewer candidates but it'll mean so much more.

I got this email the other day about a job offer at LinkedIn:
Shaming a stressed out recruiter from LinkedIn

  • I have a Swedish background. Not "Sweetish". And what difference does that make?
  • I haven't worked on "FriedZopeBase" (which is on my github) for several years
  • I haven't worked on "IssueTrackerProduct" for several years
  • Let's not "review [my] current employment". That's for me to think about.

So what can we learn from this? Well, for starters: if you're going to pretend to have taken the time, do it properly! If you don't have time to do in-depth research on a candidate, then don't pretend that you have.

I got another recruiter emailing me personally yesterday and it was short and sweet. No mention of free lunch or other superficial trappings. The only personal thing about it was that it had my first name. I actually bothered to reply to them and thank them for reaching out.

London Frock Exchange launched

Today we launched The London Frock Exchange, which is a joint project between Fry-IT, Charlotte Davies and Sarah Caverhill.

Elevator sales pitch: unlike other clothes-swapping sites, you don't swap straight across. With Charlotte, Sarah and Rani as an expert hub in the middle, you swap one frock in and can then choose a frock of equal value from the pool of frocks.

Fry-IT is co-founding this venture and hopes it'll make us billionaires by the end of the year (they take a small admin fee of £25 for sending you a frock back, but sending one in is free with freepost). It's been great fun to work on over the last couple of months, as it has meant that we (Fry-IT is an all-male, highly technical company) have had to learn about sizes and body shapes and to try to understand how a female web audience thinks. The ladies have done a great job of seeding it with lots and lots of frocks, all of which you can wear in a matter of days if you just swap one of equal value in first. Enjoy!

This is the second part of the summary of the Google London Automation Test conference that I blogged about.

The conference lasted two days. Here's a brief summary of what happened on the second day, which was on the same theme as the first: automated testing of software. The highlight of the day was Goranka's talk about performance testing.

The first talk was by Robert Binder, but I unfortunately missed it since I arrived a bit late that morning. Sorry, Robert.

The second talk was by Googler Goranka Bjedov about Using Open Source tools for performance testing. She talked partly about what tools they use and how they use them at Google, without revealing too much of course. Goranka is a UNIX bigot and made some funny remarks about Microsoft, PowerPoint and the uselessness of virtually all proprietary stress-testing tools. I asked her about stress testing over HTTP, fetching not just the HTML but also the images, CSS and JavaScript; she said JMeter can do all of that and that she highly recommends it.

The next talk was by Uffe Koch from Motorola about how to automate testing of mobile phone apps with all of their little quirks. I didn't understand much of the details, but it's even more obvious now what a challenge testing mobile apps can be.

Jason Huggins, famed author of Selenium, gave a brilliant yet technically failing talk about an idea he has: you include a Selenium script in your CruiseControl (or equivalent build tool) and as soon as you do a source commit the system automatically runs the latest build plus the Selenium tests in IE, Firefox, Safari etc. on all the various operating systems, all hosted on a Mac running virtual operating systems. His final goal is a system that, fed a Selenium test, could automatically spit out screencasts of failed results from all the various combinations of systems. Really, really looking forward to seeing this more polished and working.

You've probably all heard of Google setting up free WiFi for the whole of Mountain View. Karl Garcia, the guy who ran this, gave the next talk about how they accomplished it, the testing they did and their plans to do the same for San Francisco. Wow! I didn't realise that it was such a large task even for a big company like Google.

The last talk was by Adam Porter about Distributed Continuous Quality Assurance. He talked about something called, I think, Skoll, which is the software system that has proven this to work. The basic idea is that you build up a matrix of nodes with a configuration similar to yours and use this large grid to help run your tests automatically and share the load. Clever!

In conclusion...

It was a really interesting conference. A LOT of the technical stuff people talked about went far over my head. Unlike many people there, I'm not a full-time test engineer; on my application form I wrote something about wanting to learn about automated testing. Not to learn more, because I already know very little.

A good thing about the conference, apart from the food, was that Google itself had cherry-picked the attendees, which meant that we were all there on our own, which made it very easy to make contact with people. Sometimes at conferences that's a lot harder because people often hang out with their colleagues only.

This whole thing has given me a boost to write more tests and to think bigger than the simple unit tests which are all I barely manage at the moment.

One last thing: a Googler from the Trondheim office told me about the "15G". "What's that?" I asked. "When you join Google you gain 15 pounds [7 kilos]."

UPDATE

Here are the videos from all the talks. Enjoy!

I'm writing this from Google's office in London, UK, where Google is hosting a conference on automated testing. Testing of software, that is. There's been very little talk about usability testing or hardware testing, but a lot of talks about taking unit testing to the next level.

I'm going to blog again about the stuff I learn today. Below is a summary of the stuff I learnt yesterday. Just coming to see the Google office was a very interesting experience. It's very impressive. The office building is beautiful and has a nice feel to it. One of the most impressive things about this office is the free-food canteen, which I've used almost excessively, but nobody except my girlfriend and my kung fu teacher will mind. I'm only here for two days and they can afford it :)

Yesterday, some people from HP Labs talked about SmartFrog, an advanced system for testing not just the applications but also the production system. It seems like a good idea to do more testing on the production deployment and not just the source code. Most of it went over my head.

Following that was a talk from Kizoom about an Agile Testing Language, where they've developed a Java module that makes it possible to write unit testing source code in Java that looks very natural and English-like. That means the source code becomes so easy to understand that customers or business analysts can join in on the test creation. It reminds me of the saying that "Python is executable pseudo code".

After that was a talk by two Swiss guys from Lifeware about testing their insurance software by creating tests from real objects. Some insurance product cases are so complicated that it would have been difficult to create a test case beforehand. It seemed like a good idea, but people seemed inherently negative about it because tests should be written before it's too late.

Following that was a talk by Rick Mugridge about Fit, FitNesse and storytesting. Rick is supposed to be some sort of guru on the subject, and when the presenter introduced him he said: "A tool you've probably all heard of: Fit". I haven't. Haven't heard of him either. Sorry Rick, testing really isn't my thing. The only testing I do is some simple functional unit testing.

The next talk was about using contracts in Eiffel to automatically create tests. A very neat thing that he demonstrated was the ability to "filter" the source code down to only those lines that matter when a test fails. Almost like a traceback when an exception is thrown, but with a few more capabilities.

Lastly there was a talk by two Googlers called Joe and Adam(?) about how to restructure an AJAX application into refactored components that can be unit tested. It was mainly just common sense, but their demo was interesting because the code that they refactored looked awful. It was more or less one big function that did all the various tasks, which was later split up neatly into a view, a controller and a data source. I said it was all common sense stuff but, to be honest, the initial code that they refactored looked a lot like some of the JavaScript stuff that I produce. The title of their talk was witty: "Does my button look big in this? Building Testable AJAX Applications"

To be continued...

UPDATE

Pictures here

About the second day

A database table that I've had to work with has something called identifiers, which are human-readable names of questions. There's an HTML template where the question designer can place the various questions in a human-sorted order. All questions get identifiers in the form of <something><number>, like this: Head1 or Soma3 or Reg10, where Head, Soma and Reg are sections. Changing the naming convention from Head1 to Head01 is too late because all the templates already expect Head1, not Head01. Initially I sorted the identifiers like this:

SELECT id, identifier 
FROM questions
WHERE section=123
ORDER BY identifier 

The result you would get is:

Head1
Head10
Head2
Head3
...

The human-readable sort order should have been this:

Head1
Head2
Head3
...
Head10

To solve this, all I needed to do was to extract the digit part (one or two digits) and cast it as an integer. Like this:

SELECT id, identifier,
  SUBSTRING(identifier FROM '[0-9]{1,2}')::INT AS identifier_int
FROM questions
WHERE section=123
ORDER BY identifier_int, identifier

The reason I use a second ORDER BY key is that some identifiers look like this (a quick demo follows below):

Head9a
Head9c
Head9b
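
As a quick sanity check, here's a minimal sketch of the whole thing on invented sample data (the column names are the ones from above; the demo table name and values are made up), showing that the two-key ORDER BY yields the human-readable order:

-- Hypothetical demo table and sample data, for illustration only
CREATE TEMP TABLE questions_demo (id SERIAL, section INT, identifier VARCHAR(20));
INSERT INTO questions_demo (section, identifier) VALUES
  (123, 'Head10'), (123, 'Head1'), (123, 'Head2'),
  (123, 'Head9c'), (123, 'Head9a'), (123, 'Head9b');

SELECT identifier,
  SUBSTRING(identifier FROM '[0-9]{1,2}')::INT AS identifier_int
FROM questions_demo
WHERE section = 123
ORDER BY identifier_int, identifier;
-- Returns: Head1, Head2, Head9a, Head9b, Head9c, Head10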

There are several ways to do case insensitive string matching in SQL. Here are two ways that I've tried and analyzed on a table that doesn't have any indices.

Option 1:

(
 LOWER(u.first_name) = LOWER('Lazy') OR 
 LOWER(u.last_name) = LOWER('Lazy') OR
 LOWER(u.first_name || u.last_name) = LOWER('Lazy')
)

Option 2:

(
 u.first_name ILIKE 'Lazy' OR 
 u.last_name ILIKE 'Lazy' OR
 u.first_name || u.last_name ILIKE 'Lazy'
)

A potential third option is to make sure that the parameter sent to the SQL is cooked beforehand; in this case we lower-case the parameter before it's sent to the SQL.

Option 1b:

(
 LOWER(u.first_name) = 'lazy' OR 
 LOWER(u.last_name) = 'lazy' OR
 LOWER(u.first_name || u.last_name) = 'lazy'
)

Which one do you think is fastest?

The results are:

Option 1:  2.0ms - 2.5ms (average 2.25ms)
Option 1b: 2.0ms - 2.1ms (average 2.05ms)
Option 2: 1.7ms - 2.0ms (average 1.85ms)

Conclusion: the ILIKE operator method is the fastest. Not only is it faster, it also supports LIKE-style wildcard patterns (% and _).

I've always thought that LIKE and ILIKE were sinfully slow (yet useful when time isn't an issue). I should perhaps redo these tests with an index on the first_name and last_name columns.
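
If I do redo them with indexes, an expression index is probably what would make the LOWER() variants competitive. A minimal sketch (the index names here are just invented for illustration):

-- Expression indexes so that LOWER(...) = 'lazy' comparisons can use an index
CREATE INDEX users_first_name_lower_idx ON users (LOWER(first_name));
CREATE INDEX users_last_name_lower_idx ON users (LOWER(last_name));

-- Option 1b should then be able to use an index scan:
-- WHERE LOWER(u.first_name) = 'lazy' OR LOWER(u.last_name) = 'lazy'

A plain ILIKE comparison, on the other hand, can't use an ordinary b-tree index, so with indexes in place the result could well be the other way around.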

Here at Fry-IT we use timesheets, like so many other companies, to track the time we spend on each client project. Despite being a very "web modern" company we still don't use a web application to do this. What we use is a Python script that I wrote which uses raw_input() to get the details on the command line. The script then saves all the data in a big semicolon-separated CSV file which is stored in CVS. This works quite well for us. It's in fact all we need in terms of actually entering our times, which is usually very easy to forget.

But here's an idea for a timesheet tracker that won't guarantee, but will really help with, not forgetting to fill in your timesheets. The idea is that you have a web application of some sort that is able to send out emails to registered individuals. These emails are sent at a configurable time at the end of the work day, when you're about to leave for the day. You might have seen this before in other timesheet tracker applications; it's not new. What is new is that the email would contain lots of intelligent URLs that, when clicked, fill in your timesheet for that day.

Everybody can click on URLs in emails to open them in a nearby web browser. Obviously all of these URLs need to contain information about the day and the login credentials of the user, so that you don't have to log in on some site after you click the URL. Every URL would thus contain the login stuff and a particular entry for the timesheet tracker. Something like this:

http://timesheeting.com/Xgt4q/_8_hours_Project_ABC
http://timesheeting.com/FpE26/_6_hours_Project_ABC
http://timesheeting.com/2Jt9a/_4_hours_Project_OCH

The first part of the URL is an encoding of the user's login and the date (the date when the email was created), and the second part is so readable that you can find which one suits you by simply reading the URLs. If you need to enter a comment for every piece of work you do, that comment form can be shown on the site when you click the URL.

Another very important detail is that the system has to be smart enough to know which links it should offer. It can do this by cleverly looking back at what the user entered the last time, and the time before that, etc. It shouldn't require much: you hash every different combination of hours and project and sort by last usage date (see the sketch below). If you need to start tracking a new project or an exceptional number of hours, then the email alert isn't for you. Remember, it's just a clever improvement on the usual "Don't forget to fill in your timesheet!".
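
A sketch of what that "look back" could be, in SQL. The table and column names here are invented, since no schema is described; the idea is simply to pick the user's most recently used project/hours combinations and turn each one into a clickable URL:

-- Hypothetical schema: timesheet_entries(user_id, project, hours, entry_date)
-- The most recently used (project, hours) combinations for one user;
-- the email generator would turn each row into one of the URLs above.
SELECT project, hours, MAX(entry_date) AS last_used
FROM timesheet_entries
WHERE user_id = 42
GROUP BY project, hours
ORDER BY last_used DESC
LIMIT 5;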

Now, feel free to steal this idea for your own timesheet tracker applications. I've got too many other dreams that I need to try first. Writing about it means that at least I won't forget about the idea.

UPDATE

This comic is soo relevant and soo funny that I just have to include it: dilbert20060146538113.gif

Since I always forget how to do this and have to resort to testing back and forth or to opening a book, I will now post an example patch that alters a table correctly. Perhaps this is done differently in Oracle, MySQL, etc., but in PostgreSQL 7.4 you can't add a new column and set its default and constraints in the same command. You have to do that separately for it to work, and most likely you'll have to run a normal UPDATE once the column has been added.

Just doing:

ALTER TABLE users ADD mobile VARCHAR(20) NOT NULL DEFAULT '';

...won't work and you'll get an error.

Suppose for example that you created a table like this:

CREATE TABLE users (
"uid"                  SERIAL PRIMARY KEY,
"password"             VARCHAR(20) NOT NULL,
"email"                VARCHAR(100) NOT NULL DEFAULT ''
);

But what you really want is this:

CREATE TABLE users (
"uid"                  SERIAL PRIMARY KEY,
"password"             VARCHAR(20) NOT NULL,
"email"                VARCHAR(100) NOT NULL DEFAULT '',
"mobile"               VARCHAR(20) NOT NULL DEFAULT ''
);

Either you can drop the table and all its content and start again or else you can do the following:

BEGIN;

ALTER TABLE users 
 ADD mobile VARCHAR(20);

ALTER TABLE users
 ALTER mobile SET DEFAULT '';

UPDATE users
 SET mobile = '' WHERE mobile IS NULL;

ALTER TABLE users 
 ALTER mobile SET NOT NULL;

COMMIT;
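
For what it's worth, if I remember correctly the single-statement form from the top of this post does work on PostgreSQL 8.0 and later, so the transaction above should only be needed on the 7.x series:

-- Works on PostgreSQL 8.0+, fails on 7.4
ALTER TABLE users ADD COLUMN mobile VARCHAR(20) NOT NULL DEFAULT '';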

There are lots of small pieces of knowledge in our company. Not the kind of knowledge that requires thinking, but stuff like:

  • where's the black stapler
  • how to add a domain to the xyz-server apache config
  • who to call to sort out the air conditioner

Most of this core "knowledge" we have tried to store in a relatively structured Wiki (we use zwiki) which has been a really good start. It's good because whenever I need to refresh my memory on some IP address or how to install a printer I can go to our company wiki and search for it there.

The problem is that it's such a chore to maintain the wiki. It takes several seconds to go there, log in and (biggest bore) find the most appropriate place to write anything new or to update something old. I know I sound disgustingly lazy, but when you have to do it many times per day you want the software to help you rather than be an obstacle. I'm now instead looking at a different solution: a blog!

Blogs are great because they feel familiar and there are generic tools surrounding them, such as neatly designed templates and RSS readers. There are more ready-made Wiki solutions than good Knowledge Base solutions, and there are more blog solutions than there are Wiki solutions. This means that there are better options for getting really good software and keeping that software maintained. An added benefit of using a blog for maintaining team knowledge is that there's a natural chronological order to it. That means that old blog posts are less relevant than new ones, which becomes a useful property when you search for pieces of knowledge. I want something like this for my company. All people should post small blog items every day for all tasks they do that might happen again. It could also be used just to think out loud and to let other people know what you're working on.

Can you help? I need some suggestions on good blogging tools for a closed group (Blogger.com doesn't offer password-protected blogs). Here's what we'd need:

  • ability to quickly post to it with GUI apps and good web admin for posting
  • ability to secure access (or, even better, to host it ourselves)
  • non-proprietary storage/ ability to export & back up content
  • not Microsoft
  • fast and with great sorting/filtering functions and a clever search tool

Any suggestions? I'm confident that we're ready to pay for it so it doesn't have to be GPL (for once :).

Sorry about the cryptic title. My actual salary has not gone down, but if you are to believe this chart, the salary (in the US) for "Web programmer/developer (back end systems)" has gone down by 2.2% over the last five years but actually gone up by 8.2% in the last year.

What is quite interesting is the figure for "Content developer", which has seen a rise of 6.5% over the last five years. I guess that's the blogging. More and more people get employed now just to blog about a particular industry. This seems to be a modern trend that we'll see more and more of now that setting up a blog of your own is so much easier (I wrote mine from scratch :). I wonder where these "content developers" come from: from individual technical industries such as programming or design, or from a literary background like book writing.

Anyway, looks like the general trend is that all salaries have gone up from 2004 to 2005.