"Gamification is the use of game-thinking and game mechanics in non-game contexts in order to engage users and solve problems" -- wikipedia
Gamification sneaks into a software developer's life whether they like it or not. Some of these mechanics work for me, some don't.
What works for me
PyPI downloads on my packages
Although clouded by inaccuracies and possible false positives (someone's build script could be pip-installing overzealously), seeing your download count go up means that people actually depend on your code. Most likely they're not downloading it just to admire it; they download it to use it.
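Checking those counts doesn't have to mean refreshing a web page. As a sketch, here's how you might pull them from pypistats.org's public JSON API — note that pypistats.org is a third-party stats service, and the exact endpoint shape and field names here are assumptions:

```python
import json
from urllib.request import urlopen

# Assumed endpoint shape for pypistats.org's JSON API.
API = "https://pypistats.org/api/packages/{package}/recent"

def recent_downloads(payload: dict) -> int:
    """Pull the last-month download count out of a pypistats-style response."""
    return payload["data"]["last_month"]

def fetch_recent(package: str) -> int:
    """Fetch and parse the recent-downloads stats for one package."""
    with urlopen(API.format(package=package)) as resp:
        return recent_downloads(json.load(resp))

# Offline example of the assumed response shape:
sample = {"data": {"last_day": 120, "last_week": 900, "last_month": 4100}}
print(recent_downloads(sample))  # 4100
```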
Github followers and Starred projects
Being followed on Github means people see your activity on their dashboard (aka. home page). Every commit and every gist you push gets potential eyes on it.
When people star your project it probably means they're thinking "oh neat! this could come in handy some day so I'll star it for now". That's kinda flattering, to be honest.
Twitter followers
This doesn't apply to everyone, of course, but it does to me. I really try my best to write about work- or code-related stuff on Twitter and personal stuff on Facebook. Whenever a blog post of mine gets featured on HN, or when I present at some conference, I get a couple of new followers.
Some people do a great job curating their followers, responding and keeping it very relevant. They deserve their followers.
Yes, a lot of bogus Twitter accounts will follow you, but since that happens to everyone it's easy to overlook. Since you probably skim through most of the "You have new follower(s)" emails, it's quite flattering when it's a real human being who does what you do, or something similar.
Activity on Github projects
This one is less about fame and fortune and more about damage prevention. Clicking into a project and seeing that the last commit was 3 years ago almost certainly means the project is dead.
I have some projects that I don't actively work on but the code might still be relevant and doesn't need much more maintenance. For those kind of projects it's good to have some sporadic activity just to signal to people it's not completely abandoned.
Hacker News posts and comments "Show HN: ..."
I've now had quite a few posts to HN get promoted to the front page. Whenever this happens you get those almost embarrassing spikes in your Google Analytics account. However it happened, enough people thought it was interesting enough to vote it up to the front page.
It's important not to count the number of comments as a measure of "success", because oftentimes comments aren't constructive feedback but simply comments on other comments.
Keep this one simple: the fact that you've built something worth posting as "Show HN: ..." means you've probably worked hard.
What does NOT work for me
Unit test code coverage metrics
Test coverage percentages are quite a private matter. Kinda like your stool. Unless something amazing happened, keep it to yourself.
It's nice to see the total percentage generally increase, but don't obsess over it. What matters is that you look through the report and check that the important code is covered. Code that is allowed to break, and isn't embarrassing when it does, doesn't need to be green all the way. Who are you trying to impress? The intern you're mentoring, or the family you don't have time for because you're hunting perfection?
I must, however, admit that I too have in the past inserted `pragma: no cover` in my code. Also, being able to say that you have 100% test coverage on a library can be good "advertisement" in your README, as it instills confidence in your potential users.
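For completeness, this is roughly what that looks like with coverage.py, where a `# pragma: no cover` comment excludes a line from the report (the `parse_flag` function here is a made-up example, not from any real project):

```python
def parse_flag(value: str) -> bool:
    """Parse a yes/no-style string into a boolean (hypothetical helper)."""
    if value in ("1", "true", "yes"):
        return True
    if value in ("0", "false", "no"):
        return False
    # In practice unreachable because input is validated upstream;
    # excluded from the coverage report rather than tested for its own sake.
    raise ValueError(value)  # pragma: no cover

print(parse_flag("yes"))   # True
print(parse_flag("false")) # False
```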
Number of tests
When you realize that 1 nicely packaged integration test can test just as much as 22 anally verbose unit tests, you realize that the number of tests is a stupid measure.
A lot of junior test-driven developers write tests that cover circumstances that are just absurd. For example, "what if I pass a floating-point number instead of the URL string it's supposed to be??".
Remember, results and quality count. Having too many tests also means more things to slow you down when you refactor.
Commit counts
On projects with multiple contributors, commit counts are not a measure of anything. They carry no valuable implication. Adding a newline character to a README can count as 1 commit.
If you skim through the commit log of a Github project you'll notice that surprisingly many commits are trivial stuff, such as style tweaks or updating a README.
Yes, someone has to do that stuff too and we're always appreciative of that but it's not a measure of excellence over others. It's just a count.
Resolved bugs/issues count
If this mattered and measured anything, you could simply swallow everything with a quick turnaround and resolve or close it all.
But not every bug deserves your attention. Even a genuine bug might be such low priority that working on it costs time and distracts focus from much more important work.
Number of releases
It's nice to see projects making releases (or tags), but don't measure things by this. There's a lot of good-quality software that doesn't really fit the release model.