Tags: results

Saturday, November 14th, 2020

Introducing Simple Search – The Markup

A browser extension that will highlight the actual search results on a Google search results page—as opposed to Google’s own crap. Handy!

Or you can use Duck Duck Go.

Tuesday, July 28th, 2020

Google’s Top Search Result? Surprise! It’s Google – The Markup

I’ve been using Duck Duck Go for ages so I didn’t realise quite how much of a walled garden Google search has become.

41% of the first page of Google search results is taken up by Google products.

This is some excellent reporting. The data and methodology are entirely falsifiable so feel free to grab the code and replicate the results.

Note the fear with which publishers talk about Google (anonymously). It’s the same fear that app developers exhibit when talking about Apple (anonymously).

Ain’t centralisation something?

Thursday, December 12th, 2019

How readable—Findings

The results are in for Daniel van Berzon’s most recent experiment into accurately measuring code readability. You can read the results and read about the methodology behind them.

Thursday, September 6th, 2018

Chrome’s NOSCRIPT Intervention - TimKadlec.com

Testing time with Tim.

Long story short, the NOSCRIPT intervention looks like a really great feature for users. More often than not it provides significant reduction in data usage, not to mention the reduction in CPU time—no small thing for the many, many people running affordable, low-powered devices.

Wednesday, April 19th, 2017

Approximate Text Search Made Easy

A step-by-step explanation by Henrik on how he implemented fuzzy search on his music site—something I could do on The Session. He even talks about expanding this to work with ABC notation.
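
Not the approach from Henrik's article, but as a rough sketch of the general idea, here's a minimal approximate matcher in Python using the standard library's difflib. The function name, the tune titles, and the 0.6 cutoff are all just illustrative.

    from difflib import SequenceMatcher

    def fuzzy_search(query, titles, cutoff=0.6):
        """Rank titles by similarity to the query and keep the close matches."""
        scored = ((SequenceMatcher(None, query.lower(), t.lower()).ratio(), t) for t in titles)
        return [t for score, t in sorted(scored, reverse=True) if score >= cutoff]

    # fuzzy_search("kesh jig", ["The Kesh Jig", "The Banshee", "Out on the Ocean"])
    # -> ["The Kesh Jig"]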

Wednesday, January 6th, 2016

re:Work - Superpowers at work: OKRs

We’re about to start trying out OKRs (Objectives and Key Results) at Clearleft. It’s a terrible, jargony label, and a lot of the discussion around them is steeped in valleywank, but I think they could be a useful way of helping shared understanding within a company.

I’ll be having a read through the accompanying guide.

Wednesday, March 25th, 2015

Gridset · Responsive Report 2014

Results of a survey of over 1000 people working on the web. It’s beautifully put together and the overall trajectory regarding responsive design looks pretty positive to me.

Thursday, May 6th, 2010

HTML5, ARIA Roles, and Screen Readers in May 2010 — Research — Accessible Culture

Test results for screen readers navigating content that uses new HTML5 elements and ARIA roles.

Tuesday, May 4th, 2010

Color Survey Results « xkcd

The wonderfully detailed analysis of a colour questionnaire.

Monday, September 14th, 2009

HTML5 test results

As promised, I’ve gathered the data from one of the exercises I administered at the dConstruct HTML5 and CSS3 workshop and totted the results up in a table.

There were 30 people in the workshop but I only managed to retrieve 22 results—I don’t know what happened to the missing eight sheets of answers. This is a smaller sampling than I was hoping for and I realise that it’s too small to be considered scientifically accurate but I think it’s still interesting to see the responses of 22 smart, savvy web developers.

Across the top (in the table header) are the possible answers: the nine new elements in the HTML5 spec. Each row shows the answers given for one element as workshop attendees attempted to match up the names of the elements with the nine definitions provided from the spec. The most common answer in each row is marked with an asterisk.

HTML5 Workshop Results

         article aside details figure footer header hgroup nav section
article        5     0       2      1      1      1      0   1      9*
aside          1   11*       1      3      2      0      0   0       2
details        1     6      8*      1      4      0      0   0       0
figure         3     2       2    14*      1      0      0   0       0
footer         0     0     10*      0    10*      1      0   0       1
header         0     0       0      2      0    16*      0   4       0
hgroup         0     0       0      0      0      1    21*   0       0
nav            0     0       1      0      0      0      0 19*       2
section      12*     0       0      0      1      1      1   0       7

Ideally, the table would show a nice uniform pattern: a solid diagonal line from the top left to the bottom right. The diagonal line is there but it isn’t exactly uniform.
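
As a rough sketch (not how the tallying was actually done), here's one way you could read that diagonal back out of a matrix like this in Python: for each row, compare the count on the diagonal with the most popular answer. The counts and the variable names are just a transcription of the table above.

    # Counts transcribed from the table above: for each element's definition,
    # how many attendees picked each of the nine element names as the answer.
    elements = ["article", "aside", "details", "figure", "footer",
                "header", "hgroup", "nav", "section"]
    answers = {
        "article": [5, 0, 2, 1, 1, 1, 0, 1, 9],
        "aside":   [1, 11, 1, 3, 2, 0, 0, 0, 2],
        "details": [1, 6, 8, 1, 4, 0, 0, 0, 0],
        "figure":  [3, 2, 2, 14, 1, 0, 0, 0, 0],
        "footer":  [0, 0, 10, 0, 10, 1, 0, 0, 1],
        "header":  [0, 0, 0, 2, 0, 16, 0, 4, 0],
        "hgroup":  [0, 0, 0, 0, 0, 1, 21, 0, 0],
        "nav":     [0, 0, 1, 0, 0, 0, 0, 19, 2],
        "section": [12, 0, 0, 0, 1, 1, 1, 0, 7],
    }

    for i, element in enumerate(elements):
        counts = answers[element]
        correct = counts[i]          # the diagonal: people who picked the right name
        top = elements[counts.index(max(counts))]
        note = "" if top == element else f" (most picked: {top})"
        print(f"{element:<8} {correct:>2} correct{note}")

Run against the numbers, that flags article, footer and section straight away, which matches the discussion below.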

For the record, here are the element names together with their correct definitions. I’ve included a little sparkline with each one to show the distribution of answers. A unanimous result would show one clear spike. Wobbly uneven sparklines indicate a corresponding uncertainty in the answers.
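
As a rough sketch of the kind of thing each sparkline condenses (this isn't how the sparklines themselves were generated), a row of those counts can be turned into a text sparkline with Unicode block characters, reusing the answers dictionary from the sketch above:

    BLOCKS = " ▁▂▃▄▅▆▇█"

    def sparkline(counts):
        """Scale counts against the row's maximum and map them to block characters."""
        peak = max(counts) or 1
        return "".join(BLOCKS[round(c / peak * (len(BLOCKS) - 1))] for c in counts)

    # sparkline(answers["hgroup"]) shows one clear spike;
    # sparkline(answers["footer"]) shows two competing peaks.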

article

…a section of a page that consists of a composition that forms an independent part of a document, page, or site.

aside

…represents a section of a page that consists of content that is tangentially related to the content around it, and which could be considered separate from that content.

details

…additional information or controls which the user can obtain on demand.

figure

…some flow content, optionally with a caption, that is self-contained and is typically referenced as a single unit from the main flow of the document.

footer

…typically contains information about its section such as who wrote it, links to related documents, copyright data, and the like.

header

…a group of introductory or navigational aids.

hgroup

…used to group a set of h1–h6 elements when the heading has multiple levels, such as subheadings, alternative titles, or taglines.

nav

…a section of a page that links to other pages or to parts within the page: a section with navigation links.

section

…a thematic grouping of content, typically with a heading, possibly with a footer.

There are some interesting results here; some of them surprising, some of them expected.

The most easily identified element is hgroup. That’s also the only element name that isn’t taken from an existing English word. I don’t think that’s a coincidence. When the name of an element is created specifically to describe what that element does, it’s no surprise that the name and the definition are tightly coupled.

I was expecting to see more confusion between aside and figure. Both elements serve a similar purpose and in my opinion, the names could be swapped around without changing the definitions in the spec. But the data doesn’t support my hypothesis.

The confusion around footer surprised me. I would have expected to see more confusion between footer and aside than between footer and details.

The most glaring problem lies with section and article. This doesn’t surprise me. At this point, section and article are practically indistinguishable. For a while there, article had some more optional attributes—cite and pubdate—but those have now been removed. Now section and article have the same content model and appear to cover much the same kind of content:

…a thematic grouping of content, typically with a heading, possibly with a footer.

…a section of a page that consists of a composition that forms an independent part of a document, page, or site.

Witness the recent article on HTML5 Doctor wherein Bruce attempts to clear up the confusion and points out where the HTML5 Doctor site has been getting it wrong. If it’s tough for those smart guys to figure out when to use section, article or div, it doesn’t bode well for the rest of us.

Choosing the right element would be easier if there were some clear rules like “You can only use sections within articles” or “You can only use articles within a section” but as it is, you can nest sections in articles and articles in a section. With that much rope, authors are likely to hang themselves, overdosing on the semantic freedom.

I don’t think there needs to be a section element and an article element. I don’t have a particularly strong opinion about which one should stay and which one should go but a little trimming is definitely in order.

Now, if you’ll excuse me, I’m off to present this data to the list.