Web 2.0 and Information Architecture

Information Architecture



The Information Architecture Institute defines information architecture (IA) as the practice of arranging information so that it is understandable. The book ‘Information Architecture for the World Wide Web’ lists these technical systems that web creators consider when presenting information:

  • Organisation system: presents information to us in a variety of ways
  • Navigation system: helps users move through content
  • Search system: lets users query the content directly
  • Labelling system: names and represents content consistently

According to The Guardian, the ultimate goal of fulfilling IA on the Internet is user experience design, or the enhancement of customer satisfaction. Specifically, it relates to the usability and findability of information.
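The four systems above can be sketched in code. This is a minimal, hypothetical example (the page names and categories are invented, not from any real site) showing how organisation groups content, labelling names it, navigation links it, and search finds it:

```python
# Hypothetical pages: each has a label (labelling system), a category
# (organisation system) and outgoing links (navigation system).
pages = {
    "getting-started": {"category": "Guides", "label": "Getting Started", "links": ["faq"]},
    "faq": {"category": "Support", "label": "Frequently Asked Questions", "links": ["getting-started"]},
    "pricing": {"category": "Sales", "label": "Pricing and Plans", "links": ["faq"]},
}

# Organisation system: group pages by their category.
def organise(pages):
    groups = {}
    for slug, page in pages.items():
        groups.setdefault(page["category"], []).append(slug)
    return groups

# Search system: a naive keyword match against each page's label.
def search(pages, term):
    return [slug for slug, page in pages.items() if term.lower() in page["label"].lower()]

print(organise(pages))
print(search(pages, "pricing"))  # ['pricing']
```

Findability here comes from the interplay of the systems: the same page is reachable by browsing its category, following a link, or searching its label.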

A further look into Web 2.0

As noted in my week one post, Web 2.0 is an Internet ideology with ‘capabilities, technologies and principles’, and this week I delve deeper into what that involves. The earlier-noted characteristics I will refer back to are collaborative technology, software as a service, and value that increases as more users participate. I will also link these back to information architecture.

Collaborative technology

Taxonomy is a formalised, scientific way of classifying items into groups by shared features, so people are able to ‘distinguish’ one group from others and ‘identify’ specific aspects. In terms of Web 2.0, Gartner states that ‘folksonomy’, or tagging, is about collective intelligence: users identify and connect the content they both create and see. Although more casual than taxonomy, folksonomy is still effective because of this benefit:

“Much as synapses form in the brain, with associations becoming stronger through repetition or intensity, the web of connections grows organically as an output of the collective activity of all web users.” – O’Reilly

Hyperlinking was noted to be the foundation of the Internet, although it has since evolved to include the sharing of media and of viral news and events. Twitter has the most commonly known ‘social’ tagging feature, hashtagging, with the currently most popular hashtags described as ‘trending’. Listed below are the current trending hashtags for New Zealand, which range from Parliament through to an R&B artist’s assumed infidelity. People connect by following others and by using tags, both of which help them find solidarity through shared interests.

[Image: Twitter trending hashtags for New Zealand]
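A ‘trending’ list is, at its simplest, a frequency count of tags across recent posts. This is a toy sketch of that idea; the tweets and hashtags below are invented for illustration, not real Twitter data:

```python
from collections import Counter
import re

# Hypothetical tweets; the hashtags are made up for illustration.
tweets = [
    "Big day at #Parliament today",
    "#Parliament debate is heating up",
    "New single dropping soon #NZMusic",
    "Can't believe the news #Parliament #NZMusic",
]

# Extract every hashtag and rank by frequency - a crude "trending" list.
tags = Counter(tag.lower() for tweet in tweets for tag in re.findall(r"#\w+", tweet))
print(tags.most_common(2))  # [('#parliament', 3), ('#nzmusic', 2)]
```

As O’Reilly’s synapse metaphor suggests, repetition is what strengthens a tag: the counting here is the mechanical version of associations ‘becoming stronger through repetition’.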

Pinterest structures its users’ information through ‘pinning’ items. Pinned pictures and other added media are collected into user-created lists, and thus different linkages between content are formed. Furthermore, users select the interests they wish to see on their home feed; the feed shown here was generated from interests I had selected and comes from users I did not follow directly.
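The linkages that pinning creates can be modelled as pins shared between boards: two pieces of content become related when some user has pinned both to the same list. A small sketch with invented board and pin names (not Pinterest’s actual data model):

```python
# Hypothetical user-created boards, each "pinning" pieces of content.
boards = {
    "home-decor": ["rustic-shelf", "linen-sofa", "rustic-shelf-diy"],
    "diy-projects": ["rustic-shelf-diy", "pallet-table"],
    "living-rooms": ["linen-sofa", "pallet-table"],
}

# Invert the structure: which boards contain each pin?
pin_to_boards = {}
for board, pins in boards.items():
    for pin in pins:
        pin_to_boards.setdefault(pin, set()).add(board)

# Two pins are "linked" when they share at least one board.
def related(pin):
    return {other for board in pin_to_boards[pin] for other in boards[board]} - {pin}

print(sorted(related("pallet-table")))  # ['linen-sofa', 'rustic-shelf-diy']
```

No site designer decided that these pins belong together; the linkage emerges purely from users’ collective pinning activity.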



Both share the Web 2.0 characteristic that people create a shared Internet environment, with the technology used for this specific purpose: collaboration with other users. In terms of information architecture, the ‘language’ people use to create their tags is a consideration the creators cannot control, but it helps build their selected ‘community’ and makes websites more usable, because each site is using the ‘language’ of its customers (UX Booth).

Software as a service


Google is a major provider of technologies that follow the Web 2.0 methodology of products being provided as a service, as opposed to having customers pay for the software itself.

Google PageRank is the main system behind the most popular search engine and is used to help calculate where to place websites in Google’s index. Its clients are both the web pages and the people who use the search engine. Customers’ search queries return results that have already been ranked.

[Image: Google search results]

Smashing Magazine provides a summary of how Google’s page ranking works:

Google interprets a link from page A to page B as a vote, by page A, for page B. Google looks not only at the sheer volume of votes; among 100 other aspects it also analyzes the page that casts the vote. However, these aspects don’t count when PageRank is calculated.

PR(A) = (1-d) + d(PR(t1)/C(t1) + … + PR(tn)/C(tn)). That’s the equation that calculates a page’s PageRank, where t1…tn are the pages linking to A, C(t) is the number of outbound links on page t, and d is a damping factor (commonly 0.85).

Not all links carry the same weight when it comes to PR.

If you had a web page with PR 8 and one link on it, the site linked to would get a fair amount of PR value. But if you had 100 links on that page, each individual link would get only a fraction of that value.

Google calculates pages’ PRs continuously, but we see the update only once every few months (in the Google Toolbar).
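The quoted formula can be turned into a short iterative computation. This is a sketch over a three-page link graph I invented, using the conventional damping factor d = 0.85; real Google ranking involves the ‘100 other aspects’ the summary mentions:

```python
# A made-up link graph: each page maps to the pages it links out to.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
}

def pagerank(links, d=0.85, iterations=50):
    """Iterate PR(A) = (1-d) + d * sum(PR(t)/C(t)) until it settles."""
    pr = {page: 1.0 for page in links}  # start every page at PR 1
    for _ in range(iterations):
        new = {}
        for page in links:
            # Sum PR(t)/C(t) over every page t that links to this page,
            # where C(t) is t's count of outbound links.
            incoming = sum(pr[t] / len(outs) for t, outs in links.items() if page in outs)
            new[page] = (1 - d) + d * incoming
        pr = new
    return pr

ranks = pagerank(links)
# C collects "votes" from both A and B, so it ends up with the highest PR.
```

This also illustrates the ‘fraction of the value’ point above: A splits its vote between B and C, so each receives only PR(A)/2.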

Conversely, Google AdSense’s customers are both the paying advertisers and the people who host ads on Google’s behalf. Google provides a page detailing why its advertising service for ‘web-page space’ is ‘superior’:

  • Different types of ads can be selected: audience-appropriate content and different media options
  • Business customers ‘bid’ for space: maximising profit for both the business and the people who ‘own’ the web page
  • Control over how text ads look: more interactivity for page users

Although businesses pay for their ads to be shown by Google, they are not paying for the software itself but for the service of advertising facilitated by the software.

The designs of both PageRank and AdSense are extremely usable; Blogoscoped described the Google search toolbar as “uncluttered, smart, lightweight and commercial yet free”. I believe this is a guiding principle for the company, as the design of its web pages, and by extension the information they present, keeps accessibility in mind for users on both sides.

Value through customer use


Wikipedia has become the default place to look up general knowledge about nearly everything, ranging from an actor’s screen history through to knowledge about World War Two. Its success is a result of satisficing: it meets minimal requirements to achieve its goal with minimal resources.

Specifically, it is a free online encyclopedia that allows any of its users to add to and edit pages of information. Although this has made it unreliable as a scholarly source, Paul Graham states that it has become popular despite critics, because there is no ‘monetary’ cost and mistakes are minimised by the sheer volume of effort from ‘volunteers’ to keep the information reliable.

According to Wikipedia Statistics, there are currently around ten edits per second across over five million articles and counting. Consumers can search for information either by using the search engine provided within Wikipedia or through an external search engine, for which Wikipedia is strongly search-engine optimised. Usability-wise, the information is formatted for ease of use: categories are created, facts have to be cited, and the language can be technical but usually not overly so. However, mistakes are still known to occur despite the volume of editing.

Final comments

As I complete one-quarter of this paper, I realise how much is involved in social media. It is one of those situations where we intuitively know how to use the tools, yet we do not look into the technical depths of why and how they work. Knowing what makes social media tick, and the decisions creators make about how to present information, makes me appreciate it more.

