Wednesday, May 31, 2006

Transhumanism Goes Mainstream

Transhumanism -- and the ethical questions surrounding it -- was the topic of a recent Stanford University bioethics conference.  Titled "Human Enhancement Technologies and Human Rights," the conference explored the rights of people to enhance their own bodies.

The conference was notable because it took a step toward taking transhumanism out of the realm of theory and into a very practical line of thinking.  Indeed, the recent debates over steroid use in sports should alert us to the sorts of controversies that will accompany "biological contingencies" in the future:



[Bioethicist Anita] Silvers argues that the right not to be normal, is, in fact, the essence of freedom. Human beings, she argues, have always modified themselves, usually because we see the modifications as some kind of advantage. Banning it, as some have argued for, means forcing people to adhere to a government-imposed standard of normal.


The instinct to prevent people from making alterations to themselves worries British philosopher Andy Miah, a lecturer in media, bioethics and cyber culture at the University of Paisley in Scotland. “I explain it as a contempt for ‘Otherness.’ We seek to suppress people whom we feel are abnormal, mutants or monsters. Historically, societies have done this a lot. They continue to do it and I find it embarrassing.”




Such debate only scratches the surface of the transhumanist controversy.  Just as technologists worry about a "digital divide," so too will we have to come to terms with a "biological divide" separating those who can afford body enhancements from those who cannot (in a sense, this exists already, as not everyone has the money to join a gym, get braces or have cosmetic surgery).  But the types of transformations that transhumanists are considering may be far more drastic, perhaps leading to whole new classes of humans.  As the Barry Bonds controversy has demonstrated, that will be difficult to reconcile in a culture that values equality and fair play.



UPDATE: Transhumanism is Wikipedia's featured article for 6/2/06!

Source:  MSNBC

The Real HCM Maturity Model

I read an interesting article over on Learning Circuits entitled HCM Maturity Model. Two years ago, I might have simply nodded my head and moved on, but now I think that something very different is happening. First, their model is presented as:



Their definition of the "Knowledge Targeting Stage" includes:

By enabling line-of-business managers to collaborate with HR and training departments to tailor learning to their people, real business value begins to emerge as employees extend their knowledge beyond basic skill sets to specialized talents that have a direct impact on business performance.

The knowledge targeting stage of HCM provides HR and training leaders a unique opportunity to connect with business leaders, understand their challenges and deliver tangible value to them.


As you move up the maturity model you get into bigger and more centralized solutions with the ultimate solution being an all-you-can-eat tool. Oh, did I mention that the author comes from Saba? A good company, but an obvious bias.

Now here's the reality...



While I work a lot with LMS products (including Saba) and believe that they are really great for many things, I also find that they can easily become a barrier. For example, what if I just want to put up some quick-hit information? I'm not going to put it under the LMS. This is the same thing I talked about in my post: eLearning Technology: Tools for On-Demand Information - An LMS?

Tuesday, May 30, 2006

Connecting Little Guys to Really Big Guys

"Crowdsourcing" is the new buzzword to describe leveraging the Internet and the "wisdom of crowds" to solve problems and obtain information, whether via open source programming, file sharing or soliciting group input.  The idea, of course, isn't new, but who's using it is of interest. 





Pharmaceutical maker Eli Lilly funded InnoCentive’s launch in 2001 as a way to connect with brainpower outside the company – people who could help develop drugs and speed them to market. From the outset, InnoCentive threw open the doors to other firms eager to access the network’s trove of ad hoc experts. Companies like Boeing, DuPont, and Procter & Gamble now post their most ornery scientific problems on InnoCentive’s Web site; anyone on InnoCentive’s network can take a shot at cracking them.

The companies – or seekers, in InnoCentive parlance – pay solvers anywhere from $10,000 to $100,000 per solution. (They also pay InnoCentive a fee to participate.) Jill Panetta, InnoCentive’s chief scientific officer, says more than 30 percent of the problems posted on the site have been cracked, “which is 30 percent more than would have been solved using a traditional, in-house approach.”

The solvers are not who you might expect. Many are hobbyists working from their proverbial garage, like the University of Dallas undergrad who came up with a chemical to use in art restoration, or the Cary, North Carolina, patent lawyer who devised a novel way to mix large batches of chemical compounds.






A related concept is the iBridge Network, which aims to link universities up with entrepreneurs who can help bring technologies being developed in university labs to market.

When it works, crowdsourcing can be a win-win situation.  An individual or group looking for a solution can obtain one at relatively low cost, while individuals with knowledge can apply it to make money or advance their careers.  Naturally, the risk of abuse exists -- and that's where opportunity exists for developers seeking to design networking sites that are effective, efficient, and equitable.

Sources:  Wired, KurzweilAI.net, innovation.net






"Strange" Future Gadgets

A foldable DVD player?  An LCD display that retains an image without a charge?  A transparent toaster?  Whether you think these product concepts are weird or simply good innovations, TecEBlog lists these and others among its "Top 10 Strangest Gadgets of the Future."  Regular FutureWire readers will recall some of them from past postings...



Are Young People More Politically Engaged than their Elders?

Conventional wisdom holds that young people don't pay much attention to politics or current events.  Yet a new study by the Pew Research Center for the People and the Press has found that Americans aged 18-29 (Gen-Y, Millennials, GenNext, DotNet, etc.) appear to be more politically active than previously believed.

The report cites Census data showing a sharp uptick in youth voter turnout between 2000 and 2004 (although they still trail their elders significantly), and evidence that more young people are active in fundraising and volunteering their services.  Furthermore, young people are more likely to hold liberal political views and favor Democratic candidates than their GenX and Boomer elders, though the ratio of Democrats/Republicans is about the same as with Boomers when they were that age.  That liberal viewpoint, however, is not wholly uniform; while young people are more likely than their elders to support gay marriage and hold a favorable view of government, they are also less supportive of abortion on demand.

With easy access to news and political discussion online and on 24-hour cable, there's no reason why today's youth shouldn't be more politically literate than their predecessors.  Plus, with many of their peers serving in Iraq and other flashpoints across the globe, young people have a stake in the decisions our elected officials make.  The true test of their political commitment, though, will be whether it holds as they grow older, and whether their perspectives change as they launch careers or raise families.



Business 2.0 Profiles The Wireless Future

Business 2.0 takes a look at how the latest wireless technology will affect the way we work and interact through the Internet. 



The interactive article explores such tools as GPS tracking, VOIP, Internet TV, music recommendations, and mobile blogging.  It also looks ahead to mobile devices with full keyboards, touchscreens, and massive storage.

Source:  Emergic



Pay-By-The-Hour Computer Financing

Microsoft wants you to own a computer.  So much, in fact, that it is willing to let you pay for one by the hour.

Through its FlexGo plan, consumers can pay for half of a PC up front, then pay for usage by the hour.  After several hundred hours, the consumer owns the PC.  Such a plan allows consumers -- particularly those in developing countries -- to tailor their payments to what they can afford at the moment.

Such pay-as-you-go plans are becoming more prevalent as a way to make technology more affordable, much as easy credit did in the early 20th century... but hopefully without overwhelming consumers with debt.

Source:  Springwise



Friday, May 26, 2006

Google Embraces DIY Video Ads

As several startups explore the possibilities of using viral video in advertising, the 800-pound gorilla of online ads has awakened to the prospect.  Google will soon allow its advertisers to upload homemade video clips for their ads.

Anyone with a Google AdWords account can create and upload an ad, just so long as it is less than two minutes long.  The ad will then be displayed on blogs and websites that have related content, just like other Google ads.  Google has been testing the concept this past spring with major corporate advertisers, but it plans to roll it into production as early as today, some reports say.

As with web ad banners that evolved in the late '90s, video ads will go through some growing pains as advertisers learn what resonates with viewers and, most importantly, motivates them to buy their products or services.  This period of experimentation will be a lot like the first few episodes of an American Idol season -- brilliance juxtaposed against... well, you know...

Source:  Marketwatch



Putting Your Best Foot Backward

At first glance, running backward seems about as good an idea as running with scissors.  But the practice -- also known as retro-running -- is gaining adherents who claim it improves balance and peripheral vision, burns more calories than regular running, tones more body parts, and can reduce stress on joints.



Hardcore retro-runners have competed in races, and even marathons (the world record for a retro-run marathon is 3 hours, 43 minutes, set by a Chinese runner in 2004).  But retro-running also has its obvious hazards.  Practitioners all have stories of stepping into potholes or running into parked cars, and recommend that beginners choose quiet, open areas such as an empty track.

Not brave enough to try retro-running on your regular jogging route?  Frankly, we can't blame you.  Many treadmills and elliptical trainers operate in reverse mode, allowing you to try retro-running for yourself in a safe environment.

Source:  CNN.com



More Tattle-Tale Toilets

Toilets -- urinals in particular -- are getting quite interactive these days.  Last year we profiled a conceptual urinal that screens users for STDs.  Now, bars in New York's Nassau County are piloting urinal drain covers that play messages discouraging patrons from drinking too much.

The Wizmark Urinal Communicator (gotta love that name!) is paid for with DWI fines and is being distributed to bars for free.  When the device senses a "visitor" (their word) nearby, it plays a 15-second message:  "Hey you. Yea You, having a few drinks? Then listen up!  Think you had one to many then it's time to call a cab or call a sober friend for a ride home. It sure is safer and a hell of a lot cheaper than a DWI. Make the smart choice tonight, don't drink and drive."

Of course, anything that discourages overindulgence and potentially saves lives is worthwhile.  But one must wonder about the next logical steps for such devices, such as detecting the presence of substances in urine and automatically reporting them.

Sources:  WCBS Newsradio 880, Techdirt



All the Cells that are Fit to Print

Gabor Forgacs, a biophysicist at the University of Missouri in Columbia, is pioneering a technique for arranging human tissue that he calls "bioprinting."  One day, bioprinting could allow tissue engineers to construct portions of artificial organs.

Bioprinting involves layering clumps of "bioink" to create a three-dimensional structure.  So far, Forgacs has succeeded in creating a cluster of chicken heart cells that beat synchronously.

This is not the first time that tissue engineers have tried to build tissue structures, but bioprinting promises to be an economical process.

Source:  New Scientist



Thursday, May 25, 2006

Shift in Blended Learning - Example of Melding of Training and Support

In a previous post, Shift in eLearning from Pure Courseware towards Reference Hybrids, I talked about the shift from Courseware towards other kinds of Blended Learning solutions with a greater emphasis on information sources.

As an example from the software training world, what we are seeing more and more is a melding of Training and Support materials. In other words, our blended learning model often used to look like:




now it looks like:


Note: the midpoint represents the launch of the system. The key differences here are:

  1. Support materials such as help, cheat sheets and manuals are seamlessly integrated with training and used prior to actual system launch. The training materials are likewise seamlessly integrated with the support materials so that they can be easily accessed at the time of use.
  2. "Training" is handled over time through a series of learning events, including events scheduled post-launch that often look more like office hours. This spacing is known to have a much better impact.


Keywords: eLearning Trends

Shift in eLearning from Pure Courseware towards Reference Hybrids

Just when you've made the transition from the prior generation of CBT authoring tools (e.g., Authorware, Toolbook) to the new generation of WBT authoring tools (e.g., Captivate, Lectora), it looks like things are slowly shifting again.
The shift I'm seeing is away from the design of pure "courseware" solutions and much more to "reference hybrid" solutions.

To explain this, I need to step back and deal with the fact that terminology around eLearning Patterns is problematic.

In my mind, "courseware" is interactive (to some level) instruction run asynchronously. It is created via an Authoring Tool or a Learning Content Management System. Often there's periodic quizzing to test understanding. It's designed to hit particular instructional objectives. It's the stuff you've seen demonstrated at every Training conference over the past ten years. Oh, and it almost definitely has a NEXT BUTTON.

"Reference" is static content - meaning no interaction other than allowing the user to link from page-to-page and to search. It is asynchronous. It is normally a series of web pages, but can be PDF or other document types. It can be created using Wiki software, a content management system, web editing software or even Microsoft Word stored as HTML. It's designed to provide either real-time support for work tasks or near real-time support for look up. Often they are designed based around particular job functions and tasks to provide good on-the-job support. It almost certainly does not have a next button and should have search. You probably don't see many demonstrations of these kinds of solutions, because they aren't sexy.

I realize that these terms are vague, so let me go see what other people have to say. If you look at various eLearning Glossaries: Wikipedia eLearning Glossary, WorldWideLearn eLearning Glossary, Cybermedia Creations eLearning Glossary, you'll find that there are hopeless definitions of "courseware" and no definition of "reference." Reference sometimes comes out as "job aids" or "online support" or "online help" or various other things. Each of these other terms is slightly more specific than "reference," as they generally imply a bit more about the specific structure of the content. Thus, "reference" to me is a good umbrella term.

By the way, if you can help me ... Am I missing an alternative term for "reference?" Are we just calling these "web pages?"

I did try another avenue to find better definitions. I went over to Brandon Hall's Awards Site. He has awards for "custom content" organized by the type of function they support, e.g., sales. I would think that "content" is an inclusive term for courseware and reference. However, if you look at the judging criteria, the first question is: "How engaging is this entry?" So, I've got to assume they really are looking for courseware and not for reference material (which is inherently less engaging - in fact, you could argue that you don't want it to be engaging; you want it to be quick and to the point).

The other category is Learning Technology. The sub-categories here are:

  • Course Development Tools
  • Software and System Simulation Development Tools
  • Soft Skills and Technical Simulation Development Tools
  • Tests or Instructional Games Creation Tools
  • Rapid Content Creation Tools
  • Live E-Learning/Virtual Classroom Technology
  • Just-in-Time Learning Technology
  • Innovations in Learner Management Technology
  • Innovations in Learning Content Creation and Learning Content Management Technology
  • Open

None of this makes me think about "reference," but maybe it would be included in either "Rapid Content Creation" or "Just-in-Time."

Okay, now that I'm done ranting about terminology, here's the real point ...

We are seeing a significant shift in development away from mostly creating courseware to creating more-and-more reference materials designed to be just-in-time support.

Even more so, we are seeing a shift towards Hybrid Reference and Courseware combinations where the Courseware is embedded within the Reference. So, if users might not quite understand a concept, or you want to make sure you provide a nice introduction, you embed that courseware within the web pages.

As an example of this, we've created several hybrid reference/courseware solutions that are designed to both introduce and support the use of software. Traditionally, we would have built courses in something like Captivate and pointed users to go take these courses first. We would have separately created a "support" site that would have a FAQ and help on various tasks.

In the hybrid solution, we created the support site as the first element you go to and put a prominent "First Time User" link on the home page. This page takes them to instructions on how to get up and going. Most of the content is presented as static web pages that tell how to perform particular tasks, but some of the pages contain embedded Captivate movies to demonstrate or simulate use of the system.

This design has given us several advantages:
  • End-users can get started with the application quickly and receive incremental help on the use of the system as they need it. We've eliminated most up-front training.
  • End-users only see one solution that provides "help on using the application" as opposed to seeing "training" and "support" separately.
  • It costs less to produce because there's greater content sharing between training and support materials and because we build more of the content as reference which costs less.

I'll be curious to hear if other people are seeing a similar shift in what they are building.



Keywords: eLearning Trends, eLearning Resources

Wednesday, May 24, 2006

Surveys in eLearning

Through two recent experiences I've come to realize that many people in eLearning are not using Survey tools nearly as much as I would have thought.

For my recent Collaborative Learning eLearning 2.0 Class, I used a SurveyMonkey survey to ask the class members about their background, interests and availability. You can see the survey results on the Course Wiki.

SurveyMonkey is very easy to use. It's free up to a limit. It will even help send the surveys, track who's taken them and nag people who have not completed the survey.

And, of course, there are lots of other tools out there that are equally easy to use and equally free. I've used Zoomerang before and it's great as well.

So, why am I hearing that most people aren't aware of these tools or aren't using them for pre & post course surveys?

Keywords: eLearning Resources

Email, Knowledge/Content Management - Email as a Future Application Interface

James Robertson's excellent Column Two blog pointed me to an interesting article by Seth Gottlieb - Email and Content Management which provides some practical suggestions about how to move from email based content management towards better mechanisms.

While I agree with Seth's main contention that email, especially email with attachments, makes content management much harder, I actually think that Seth is swimming against a very, very strong current and is probably going to get sucked into the ocean shortly. He may know this since he points us to another article that explains The Good In Email (or Why Email Is Still The Most Adopted Collaboration Tool).

And, I personally believe that email is going to become more and more the "front end" of many of our applications. Many of the systems we build these days are workflow applications that email the people involved to notify them or even allow them to respond. This is our way to "get in front of the user." And it works extremely well. Almost every application is starting to do this. As we get more sophisticated about workflow, we are going to see this increase.
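To make that concrete, here is a rough sketch of the kind of workflow notification I have in mind, in Python using only the standard library. The sender address, SMTP host, task fields and intranet URL are placeholders I made up for illustration; they are not part of any particular product mentioned here.

```python
import smtplib
from email.message import EmailMessage

def notify_approver(task_id: str, approver: str, summary: str) -> None:
    """Send a workflow notification; the email itself becomes the
    'front end' -- the user replies or clicks through to act."""
    msg = EmailMessage()
    msg["From"] = "workflow@example.com"        # placeholder sender
    msg["To"] = approver
    msg["Subject"] = f"Action required: task {task_id}"
    msg.set_content(
        f"{summary}\n\n"
        "Reply with APPROVE or REJECT, or open the task at\n"
        f"https://intranet.example.com/tasks/{task_id}"  # placeholder URL
    )
    # Assumes an internal SMTP relay is reachable at this host/port.
    with smtplib.SMTP("mail.example.com", 25) as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    notify_approver("4711", "manager@example.com",
                    "Expense report #4711 is waiting for your approval.")
```

The mail-sending mechanics aren't the point; the point is that the notification itself is the application's way of getting in front of the user.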

So, while I would love to believe that much of the current communication that occurs through email will be migrated to other kinds of vehicles with more appropriate persistence and searchability characteristics (e.g., wikis), I think that most users are not heading that direction today and what is a more likely trend is to have email become more integrated so that it acts seamlessly with our CM/KM solutions.

One Third of Adults "Not Learning" - BBC - We Need a Better Definition of Learning

I found this via Donald Clark's blog. It pointed to an article by the BBC - Third of adults 'not learning'

The study defines learning as being not only taking formal courses but also practising, studying or reading to develop skills, knowledge, abilities or understanding of something.

This can even be part-time at home. It does not have to have been finished or to have led to a qualification.

The survey found that 20% of adults said they were currently learning, with more than 42% having done so in the past three years.


What I really found fascinating was how the study authors defined learning and then how it was interpreted by people who answered.

  • They left out any mention of TV or radio, and the words "listening" or "watching." I guess people don't learn anything from watching or listening to BBC programs.
  • They used the word "to" in their definition, which I would interpret to mean that they are only counting "intentional learning," not "unexpected learning." In other words, if I learned something by reading, but I hadn't set out to learn that thing, then it really doesn't qualify as "learning" in terms of how the question was phrased.

I guess I believe that even someone who might not sit down "to develop skills, knowledge, abilities or understanding of something" might still watch the occasional SportsCenter and find out a bit more about how the Heat's offense works.

My point is that it's somewhat dangerous for all of us to call ourselves "learning professionals" and to work in this field if the common definition of learning excludes informal and unexpected learning.

Tuesday, May 23, 2006

Tags, Search Effectiveness, Personal Benefits

A couple of interesting recent posts and my experience in my Collaborative Learning Class has me thinking about the usefulness of Tags both personally and in workgroups.

From Bill Ives - Where Tagging Works and Where Tagging Doesn’t Work – Search Engine Lowdown
I guess I tend to agree with Danny Sullivan about the tagging and search but that is not the original intention of tagging. If I want to search on a key word, I will still go to Google as the most efficient way. If I have the time to go exploring through multiple links and see the interrelations between key words, I might go to del.icio.us. However, if I want to set up a way to store and share links on a particular topic, I will use del.icio.us, which I have done already in co-authoring an article.
Interestingly, Bill points to a search on Google for "Web 2.0" and gets 75,900,000 hits with the famous O'Reilly article at the top of the list - which is a pretty dang good result and makes sense given Google's inbound-link-based algorithm. If you do a similar search on del.icio.us, you first realize (as Bill found) that tags cannot have spaces, so you actually need to look for "web2.0" - you can see what you get at: http://del.icio.us/popular/web2.0

I definitely don't think the results are nearly as good as what you get in Google. But look on the right side for "related tags," which is how you can find things that are related.

I completely agree with Bill's assessment: del.icio.us is more useful if you are trying to find related terms to search against, but the quality level of the results doesn't seem to be there.
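As a tiny illustration of the "tags cannot have spaces" point above, here is a throwaway helper (my own sketch, not anything del.icio.us itself provides) that collapses a phrase into a del.icio.us-style tag:

```python
def to_tag(phrase: str) -> str:
    """Collapse a phrase into a single tag: lowercase, whitespace removed,
    so "Web 2.0" becomes "web2.0". """
    return "".join(phrase.lower().split())

if __name__ == "__main__":
    for phrase in ["Web 2.0", "eLearning 2.0", "Informal Learning"]:
        print(f"{phrase!r} -> {to_tag(phrase)!r}")
```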

A closely related great series of articles can be found at What are the Personal Benefits of Tagging? -
One thing that the most useful of these reasons all have in common is that they allow the user to express tags using personal vocabulary.


I personally have found that because I've switched to Yahoo MyWeb that has full-text search across my bookmarked pages, I've come to use tags mostly to represent two things:

  • Actions - I tag items with "blogthis" if I plan to come back and write them up in a blog post.
  • Sharing - I tag items that I plan to share with a specific tag so that others in my group can find it.

So for me, it's not quite the folksonomy effect that most people talk about, but based on these articles, I'm starting to think that's what other people are finding as well.

Beta Program, Email List, Acquisition - A Case Study in What Not to Do

Let me set the context and then let excerpts from the emails tell the rest of the story. A company with a very good product (the leader in its category) asked its leading users to participate in a Beta program right around the time it was acquired. The acquired company established a listserv mailing list and put all of their top users on the list... Here are a few of the emails (there are about 80 in total), but you'll get the idea:

Dec 8, 2005
Once the beta is ready I am sure they'll let us know first... No point asking every day.
Dec 8, 2005 - From official at company that was acquired
You will receive an email once we start the beta. Until then, I'd like to ask you not to post messages on this beta list. Thanks in advance for your understanding. The Product Team
Jan 6, 2006
Any word on the Beta?
Apr 11, 2006
Any word on the Beta?
Apr 11, 2006
I received an email about registering for the Beta from the new company. When I responded to be sure that they weren't putting me in twice, I was told that they had already filled out the group! I had responded immediately upon receiving the notice!
Apr 11, 2006
I had the same experience. Probably the new company has a different way to select beta testers...
May 22, 2006
On the new company's blog, it says that the product is in full beta, and that they have already received feedback. Yet, I have not seen anything about the product in over a month. Am I missing something? Thanks.
May 22, 2006
I believe that the prior Beta program is gone. The new company runs it under a different program called 'Prerelease', not beta. I am not sure how to apply, the process is much different now.
May 22, 2006
I don't know what's more pathetic - us all waiting for an email that apparently wasn't ever going to come for the start of this beta, or having to read 3rd-hand that we were led to believe we were in the beta testing program and when they changed their program the company, in effect through inaction, 'screw the old beta testers, no need for us to at least let them know that they're no longer beta testers!'. If anyone from the new company is reading this, and I doubt you are, next time show a little common courtesy for your customers.
May 23, 2006
I am looking for the message they are trying to send. When a company asks people to participate in a beta test, are they not identifying the group of people who are most passionate about your product and who are early adopters? So if they then send them the (unspoken) message that you do not need them... what exactly is the company trying to say to these passionate early adopters?

It is an interesting strategy.
Discussion of alternative products starts on the list; a list of 10 competitors, including open source options, is being discussed.

May 23, 2006
Abandoning their products is pretty easy. There are plenty of good alternatives out there, and they all output the same types of files....

And this is just TOO FUNNY...
May 23, 2006 -
From: XXX@thecompany.com
To: a member of the list
Sent: Tuesday, May 23, 2006 11:24 AM
Subject: RE: Beta program

Would you be so kind and send this message to the list (I am not authorized to post) - Hopefully, I'll find out what's going on shortly.

I don't yet know how this story will end. Will the users all find alternative products? Will the new company figure out how to post on their own email list?

Monday, May 22, 2006

Web 2.0 - Mainstream Term

The following items are somewhat telling about how mainstream Web 2.0 has become.

Over on Read/Write Web, an interesting article by Richard MacManus - Coming to Terms with Web 2.0
Then on 18 December 2005 I made the infamous declaration that "Web 2.0 is dead. R.I.P.". Ever wish you hadn't pressed the 'publish' button?

So what's 2006 brought? Believe it or not, I think it's brought acceptance of the term 'Web 2.0'. That's actually caught me by surprise - I got it wrong. Web 2.0 hasn't died, it's actually morphed into a mainstream term that Gartner and IBM use.

Gartner Says Web 2.0 Offers Many Opportunities for Growth, But Few Enterprises Will Immediately Adopt All Aspects Necessary for Significant Business Impact
While it is straightforward to add specific technologies, such as Ajax or RSS to products, platforms and applications, it is more difficult to add a social dimension.

By 2008, the majority of Global 1000 companies will quickly adopt several technology-related aspects of Web 2.0, but will be slow to adopt the aspects of Web 2.0 that have a social dimension, and the result will be a slow impact on business, according to Gartner, Inc.

Keywords: eLearning 2.0, Web 2.0

Search - Implications on Knowledge Work

This post was sparked by a couple of recent articles:

Babson Knowledge: How Google Plans to Change the Scope of Googling (And Why Information and Knowledge Workers Should Care).

InfoWorld: Reinventing the Intranet

These articles point out what we already know:
It is generally easier to find stuff that is in the mass of public information than it is to find stuff inside our own corporations.
While there is some defense of Corporate IT, in that it is harder to get at information stored in a wide variety of systems (ERP, CRM, etc.), it is still surprising that we are not seeing more rapid adoption of search technologies inside the firewall. But you are likely at fault as well; after all,
What desktop search are you using?
If the answer is "none" ... then get with it. Desktop search via tools such as X1 will change your life. You will find that you don't spend nearly the time worrying about categorizing your own content into folders. I used to spend a lot of time worrying about rules in my email. Not anymore. Just search and you will find.

Soon, these tools will be expanding out to your internal network, your internal systems and basically every piece of information.

Interestingly, I think the algorithms that Google relies on for searching the public web (based on incoming links) will not work nearly as well inside the firewall as older algorithms based on frequency, semantic interpretation and other techniques.
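To show what I mean by frequency-based ranking, here is a toy sketch in Python. It is deliberately crude and is not meant to represent Google's (or any vendor's) actual algorithm; the idea is simply that intranet documents rarely link to one another, so term counts are often the main signal available.

```python
import re
from collections import Counter

def rank_by_term_frequency(query: str, documents: dict[str, str]) -> list[tuple[str, int]]:
    """Rank documents by how often the query terms occur in them,
    a crude stand-in for frequency-based (rather than link-based) search."""
    terms = re.findall(r"\w+", query.lower())
    scores = {}
    for name, text in documents.items():
        counts = Counter(re.findall(r"\w+", text.lower()))
        scores[name] = sum(counts[term] for term in terms)
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    docs = {
        "expense-policy.html": "Submit the expense report form; expense limits apply.",
        "travel-faq.html": "Travel booking and expense questions answered here.",
    }
    print(rank_by_term_frequency("expense report", docs))
```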

Keywords: eLearning Trends, eLearning 2.0, Web 2.0

Firewalls and Security in Software as a Service

One of the interesting outcomes of my recent course - Collaborative Learning Using Web 2.0 Tools - A Summary - was general consensus around:
  1. Software as a Service is Great for Learning Professionals inside Corporations
  2. Firewall restrictions still pose a problem for SOME services
  3. Security is a concern, but generally should not stop use

The reason that Software as a Service is so attractive is that it is often hard to get Corporate IT to spend time setting up even simple software packages, and even harder to get them to agree to support those packages. Thus, while we are excited about wikis, blogs, discussion groups, etc., the practical reality is that, unless they already exist somewhere and we can piggy-back on those implementations, we are not going to be able to get them implemented by Corporate IT. Thus, there is real attraction in being able to sign up for hosted services that provide these tools without Corporate IT being involved.

For us to be successful doing this, we first need to make sure that the system will work with whatever firewall restrictions exist. For example, in our course, we found that Yahoo Groups were restricted in some corporate environments. Elluminate did not work through several firewalls, so we had to switch to WebEx. The Yahoo Toolbar (for MyWeb) couldn’t be installed on locked desktops. Instead, we should have used Del.icio.us. We had no trouble with our PBWiki. The good news is that there are lots of these services in most categories, and thus, the best advice is:

Test any service you are thinking of using in different locations and desktops to make sure that you are able to use the service effectively given firewall restrictions.

Do not believe any vendor claim that "it works through firewalls" because a firewall can be configured to stop anything it wants. That's its job.
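If it helps, here is a minimal sketch of that kind of check, in Python with only the standard library. Run it from each location and desktop you care about; the service URLs below are just examples, so substitute whatever you are actually evaluating.

```python
import urllib.request

# Example endpoints for hosted services under consideration (placeholders).
SERVICES = {
    "wiki host": "https://pbwiki.com/",
    "social bookmarking": "http://del.icio.us/",
    "survey tool": "https://www.surveymonkey.com/",
}

def check_reachability(services: dict[str, str], timeout: float = 10.0) -> None:
    """Report whether each service responds from the current network,
    i.e., whether the firewall/proxy lets the request through."""
    for name, url in services.items():
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                print(f"{name}: reachable (HTTP {response.status})")
        except OSError as exc:   # URLError and socket errors are OSErrors
            print(f"{name}: blocked or unreachable ({exc})")

if __name__ == "__main__":
    check_reachability(SERVICES)
```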

The other big hurdle is the question of security. What's your exposure from having your content at a hosted location? The first part of the answer is whether outside parties (not you or the host) can hack into the system and get at your content. Generally, I think you will find that hosts provide fairly reasonable controls, but you will want to check into their security approach.

The second part is that some set of administrators at the hosting provider will have the ability to get in and see your content. The host may make it difficult for an administrator to get in there, but often it's not that difficult. Really, this is the same situation you face with internal software, where some set of Corporate IT staff has access to content (likely including email). In the case of hosted solutions, the added "risk" is that the administrators are employees of an outside company. On the other hand, you probably have better recourse against the host provider if an administrator does something wrong than you would against your own employee.

The security issue is not new. There are likely lots of content types that already get stored externally by your organization. They might be using Salesforce.com as a CRM. They might be using an email system that handles spam filtering and archiving. Chances are, the content you are putting up in your learning solution is far less of a risk than what is already getting stored out there. Which brings us to the first defense ... while the risk is probably low that you will actually have information leak out:

Try to limit content to information that would cause little damage to the company if it were made public.

What if you need to work with content that is confidential and would potentially represent a risk? Well, then you are going to need to go through the same protocols you would use internally to vet the system, and you will likely need to involve your IT staff again, because they are usually the ones who make these determinations. This will slow down your implementation time, but it is not nearly the hurdle you face trying to bring software in-house.

Will they derail your process? If you look at what's happening, you find mixed reviews. In an eWeek article, Security May Dog Software as a Service, they provide a mixed answer:

the biggest challenge for companies such as Microsoft that see their future in on-demand software may be getting customers to understand and be comfortable with the model.

And, the current state of network and application security at most companies is poor enough to make it hard to imagine on-demand deployments being any worse, experts agree.


You are still on the hook according to Software as a Service and Security:
A company must show due diligence in its relationships with third-party providers to ensure that those providers maintain and comply with U.S. and international regulations to which that company is subject. Under such regulations, it is the responsibility of the company—not the software as a service provider—to protect sensitive information.

The advice from an article in CFO Magazine:
Data security. Although SaaS vendors invariably emphasize the resources they devote to security, many customers remain uncomfortable with their employee and customer data flying over the Internet, not to mention potentially residing on the same data-center server as their rivals'. "Look at security. Do the due diligence. Make sure the vendor has the right premises and that protecting your data is its top concern," counsels David Brooks, director of CRM at Magma Design. Juniper Networks CIO Kim Perdikou insists on modifying SaaS contracts so that she has the right to do periodic security audits.

What's the bottom line? Chances are that you are not going to run into much of an issue. Try to keep the content to things where there is low risk. And where you have sensitive data, bring in IT staff to audit the security, bless the vendor(s), and check the protection in the contracts. It's still better than having to install software behind the firewall.

In talking with a lot of different CTOs from software development companies in Southern California, it appears this is the way forward.

Keywords: eLearning 2.0, Web 2.0

[BREAKING NEWS] NOAA Predicts 10 Hurricanes for '06 Season

The National Oceanic and Atmospheric Administration (NOAA) has forecast 10 hurricanes for the 2006 Atlantic storm season (which begins June 1), with four to six of those being "major" storms (Category 3 or higher) and an additional three to six named storms that don't reach hurricane strength.

The prediction is for a far less active hurricane season than last year, which spawned an unprecedented 28 named storms and 15 hurricanes (the NOAA had predicted that last year would be a busy season, but not that busy).  That, of course, included the colossally devastating storms Katrina and Rita, as well as Wilma, which at one point became the most powerful hurricane ever recorded.

Scientists have also warned that hurricanes could track farther north this year, threatening the Mid-Atlantic states and possibly even New England.  Meteorologists are also noting an overall increase in hurricane activity since 1995; most attribute this to a natural cycle that can run from 15 to 40 years, though some say that global warming is also a contributing factor.

Source:  MSNBC




Hyper-Local Weather

Remember the old George Carlin bit in which he wondered why TV weather people quoted weather reports from the airport when, in fact, nobody lives at the airport?  For everyone who feels that weather forecasts are irrelevant comes "hyper-local" weather.

Accu-Weather, NBC Weather Plus and The Weather Channel are all perfecting technology that will allow them to deliver granular weather reports, specific to areas as close as a mile apart.  High Resolution Aggregated Data (HiRAD), when combined with radar and satellite imagery and delivered through the Web and digital cable, could effectively allow the viewer to see immediate and long-range forecasts for his or her own neighborhood or street.  This level of detail would be enormously helpful -- even a lifesaver -- in the case of powerful yet highly localized and fast-moving events such as tornadoes or thunderstorms.

Sources:  Broadcasting & Cable, Lost Remote




A Future for Books and Paper

It's always been assumed that books and paper would be among the first victims of the Information Revolution (remember that phrase?).  But, as we've seen, it hasn't quite worked out that way.  To the contrary, argues anthropologist Alex Golub, the printed word will almost surely remain a part of our future:





It’s true that there is a lot of stuff you can do with PDFs and the Web that you can’t do with paper, but too often people take this to mean that digital resources “have features” or “are usable” while paper is just, you know, paper. But this is not correct — paper (like any information technology) has its own unique form of usability just as digital resources have theirs. Our current students are unused to paper and attribute the frustration they feel when they use it as a mere lack of usability when in fact they simply haven’t figured out how it works. Older scholars, meanwhile, tend to forget about paper’s unique utility because using it has simply become second nature to them.



Some of the features of paper are well known: Reading more than three pages of text on a screen makes your eyes bleed, but I can read paper for hours. You can underline, highlight, and annotate paper in a way that is still impossible with Web pages. And, of course, in the anarchy after The Big Electromagnetic Pulse the PDFs will be wiped clean off my hard drive but I will still be able to barter my hard copy of Durkheim’s Elementary Forms of the Religious Life for food and bullets.



But my passion for paper is about more than preserving the sociological canon in a post-apocalyptic future. Using paper is embodied in a way that using digital resources are not. Paper has a corporeality that digital texts do not. For instance, have you ever tried to find a quote in a book and been unable to remember whether it was on the left or right hand side of the page? This just a trivial example of way in which paper’s physicality is the origin of its utility.





Golub goes on to praise the librarian's and bookstore's role in "filtering" and organizing content, and even the decorative value of books in the home.  One suspects that Golub is not a voice in the wilderness, that he speaks for many who feel the same way.

Sources: Inside Higher Ed, Question Technology


Friday, May 19, 2006

Class of '06 is Lukewarm About the Hot Job Market

A reasonably strong economy, combined with the first rumblings of Baby Boomer retirements, is fueling the hottest job market for graduating college students in years.  According to one survey, 60% of employers said they plan to hire more college grads than last year.  The market is reportedly so hot that even liberal arts majors are in demand!  (Seriously, employers are realizing the benefit of hiring employees with diverse backgrounds.)

But as we've noted before, today's young people have a healthy dose of cynicism when considering corporate careers.  Not only have today's kids seen their elders burned by downsizing and witnessed the Enron debacle, but thanks to the Internet, they have more information at their fingertips than any generation before:



[Graduating students] plot their careers like chess masters. They ask pointed questions about company ethics and finances. Parents are more involved too, quizzing recruiters and in rare cases, even sitting in on job interviews.

"Kids today are wired. They can find out almost anything in seconds about a company and the questions recruiters ask," said Mark Mehler, co-founder of CareerXroads, a New Jersey consulting firm. "If the picture you paint is not reality, this generation will quit on a dime."

Recruiter W. Stanton Smith took note of such brashness a few years ago, when promising young hires quit. "People weren't just saluting and taking orders anymore," he recalled.


In response, businesses in typically stodgy fields such as accounting are highlighting their employee-friendly corporate cultures and their positions on socially conscious issues.

The involvement of parents is another factor that sets today's youth apart from their predecessors.  With today's narrowing generation gap, kids and their parents are closer, and kids want their parents to take an active role in their life decisions.  I, on the other hand, would have died if my parents had sat in on any of my job interviews... that is, if the interviewer hadn't laughed me out of the room first.

Source:  Ypulse, LA Times




"LifeStraw" Brings Drinkable Water to the Developing World

From the innovations-so-obvious-it's-amazing-no-one-thought-of-them-before department comes the LifeStraw, a plastic tube with an iodine/carbon filter designed to allow people to drink water safely.



Created by a Danish inventor, the LifeStraw can be used in developing countries and disaster zones where potable water is scarce. To use it, the drinker simply sucks through it; the water passes through the filter, which kills bacteria and blocks parasites and other contaminants. The list price is around $3.50 (though considering that many in the developing world subsist on less than a dollar a day, the cost would have to be subsidized somehow). Each filter could last from six months to a year.

Many futurists fear that the worldwide lack of fresh water will be one of the great global crises in the coming years. Already, an estimated 6,000 people die of water-borne diseases each day, and many throughout the world travel miles on foot in the search for fresh water.

Source: BBC


Elves, Measuring Results and Informal Learning

Brent and I have been having a nice blog discussion. Our previous posts discuss what should be measured: Intermediate Factors in Learning, Intermediate Factors - Impact Many Measure One. And we finally seem to be agreeing with one exception. And this exception relates closely to my earlier concern eLearning Technology: Informal Learning is Too Important to Leave to Chance.

This discussion makes me think back to a question that I used to ask in my Computer Science Project class (based on something I read, but I now forget who it was - Fred Brooks maybe):
If an elf appeared and offered to give you a program that met your spec, how happy would you be?

After the initial jubilation wears off, the class realizes that there is some real concern about whether the program actually works as intended, and without insight into the guts of what's going on, it feels very uncomfortable. How do I know it works? How do I know what its limitations might be?

It turns out that you really want more than just a program. You want one that you know how and why it works.

So, back to learning and measuring results. I actually don't want "just the outputs." I want to know how and why it's working. In the case of improving customer satisfaction based on knowledge of Store Layout and Product Knowledge, I want to know whether we've increased employees' knowledge and whether customer satisfaction has improved. While my client only cares about customer satisfaction, if satisfaction remains the same while knowledge increases, that is an important data point. It tells us about the system.

So, on to informal learning. Brent said in From Product Focus to Audience Focus:

The process is continuous and if our “training solution” is organic, dynamic, and flexible, it is very difficult to measure using the current method of measuring learning products. My point is “who cares”. If we have set up environments that help people collaborate, and support their informal learning, we should see output improvements.

And this is our only point of remaining disagreement. I would be much more comfortable if you can explain the internals of this system, how you know it works, when it will work and when it won't. As I said in Intermediate Factors in Learning ...
If you create an "organic, dynamic, flexible" learning solution but can't explain how it impacts the end numbers, then: (a) you won't get credit, (b) you won't know if you can repeat it successfully, and (c) you won't know if it's really working.

Keywords: eLearning Trends, Informal Learning

Optical Processing Could Increase Internet Speeds by 1000X

An Australian research consortium is reportedly developing a photonic chip that would process digital signals much faster than conventional silicon electronics.

The chip would work by routing light signals, controlling the frequency of light pulses, and regulating the behavior of the light by changing its color.  Like silicon, optical circuits could be printed, leading to cost-effective mass production.

By converting much of the Internet's networking to fiber optics, such circuitry could eliminate lag times and create a 1000-fold increase in the Net's overall speed.  With such optical switching in place, even the largest downloads could be completed within a fraction of a second.
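A back-of-the-envelope calculation shows what a 1000-fold jump would mean; the speeds below are illustrative assumptions of mine, not figures from the researchers.

```python
def download_seconds(size_gb: float, speed_mbps: float) -> float:
    """Seconds to move size_gb gigabytes at speed_mbps megabits per second."""
    megabits = size_gb * 8_000        # 1 GB is roughly 8,000 megabits (decimal units)
    return megabits / speed_mbps

if __name__ == "__main__":
    movie_gb = 4.7                    # roughly a single-layer DVD
    today_mbps = 10                   # an illustrative broadband link
    optical_mbps = today_mbps * 1000  # the claimed 1000-fold increase
    print(f"Today:   {download_seconds(movie_gb, today_mbps):,.0f} seconds")
    print(f"Optical: {download_seconds(movie_gb, optical_mbps):.1f} seconds")
```

Roughly an hour-long download today versus a few seconds; whether it ever gets down to "a fraction of a second" depends on everything else in the path keeping up.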

Of course, this would depend on almost all of the Internet's components being converted to optical circuitry -- a daunting task even if the technology were immediately available.  The research team, though, hopes to have a functioning optical switch ready within the coming months.

Source:  Sharkride




Forbes Summarizes "Future in Review"

Forbes has a summary of the recent "Future in Review" meeting held last week at the Hotel del Coronado in San Diego.  In case you missed the $4,000-per-head event in which you could have rubbed shoulders with key future-focused business leaders and consultants, the conversation reportedly ranged from the functional to the fantastic, covering topics such as space travel (SpaceX founder Elon Musk outlined a plan to travel to Mars before 2020), "flex" cars that run on multiple fuel types, and Google's philanthropic efforts to fight disease in the developing world.




Thursday, May 18, 2006

Is Cognitive Computing Poised to be an "Overnight Success"?

Can a computer chip process information the same way the human brain does? If so, how far away are we from such "cognitive computing"? It all depends on whom you ask.

Palm Computing co-founder Jeff Hawkins says, "We've been trying to do this for 50 to 60 years. Artificial intelligence, fuzzy logic, neural networks, the Fifth Generation project -- they've all had big moments in the sun. The reality is we've not had much success." But he's not as pessimistic as he sounds, as he has founded a company called Numenta to build a computer memory platform that mimics human thought processes.

Others, citing rapid advances in computing power and efficiency, believe we may be much closer to major breakthroughs. Says James Albus, a senior fellow and founder of the Intelligent Systems Division of the National Institute of Standards and Technology, "We are at a tipping point... analogous to where nuclear physics was in 1905. The technology is emerging to conduct definitive experiments. The neurosciences have developed a good idea of computation and representation of the brain." The most advanced supercomputers, Albus notes, are approaching the computational speed of the human brain.

The blog Responsible Nanotechnology cites recent talks by futurists Ray Kurzweil and Eliezer Yudkowsky illustrating how major technologies such as this can appear to be going nowhere for long periods (even though work is underway), followed by an explosion in innovation and productivity, taking most everybody by surprise. It's analogous to the groundbreaking actor or musician who becomes an "overnight success" after years of hard work, practice and dashed hopes.

What excites computer scientists about cognitive computing is that it's the process that allows people to perform abstract thinking, learn, recognize patterns, and navigate spaces. It's what makes us smart, as well as giving us our personality and creativity. Besides fulfilling the promise of genuine artificial intelligence, cognitive computing may also allow us to repair certain types of brain damage and degeneration with a "bionic brain."

RELATED: To foster the development of artificial intelligence, the European Commission's Future and Emerging Technologies initiative has created a "virtual community" that will allow software to generate avatars that could interact and learn. Aside from helping researchers learn more about how AI cooperates and manages conflict, the environment will also help sociologists model behaviors in crisis environments.

Sources: ZDNet Australia, Responsible Nanotechnology


Intermediate Factors in Learning

In Measure Intermediate or Final Factors, Brent responded to my posting: Technology: Intermediate Factors in Learning.

Brent and I (and Jay Cross and lots of others) agree that measures of butts in seats, number of completions, etc. are generally not that useful in telling what impact we are having on what matters to the business.

Where Brent and I really seem to disagree, and I've seen this in other places, is on the importance of Intermediate Factors.

The key to almost every engagement for me is understanding how human performance drives the business. Yes, the client hires me to improve business results which is always ultimately around Revenue or Cost. Most of the time the client already measures intermediate factors such as Customer Satisfaction, Loyalty, Quality, etc. All of these are known to have impact on Revenue and Cost.

Most of my clients have some understanding of how human performance impacts these measures as well, but most often they have not fleshed this out to the level that is needed. Thus, if they tell me that they care about Revenue and that they really want to look at improving Customer Satisfaction because it is the biggest predictor of Revenue, then I will drill down to what impacts Customer Satisfaction. Some aspects I won't be able to affect, but other aspects are within our ability to influence. So, we will continue down the path.

If they are a more sophisticated organization and have Customer Satisfaction Surveys, then likely we'll already have some very interesting intermediate factors defined. In home improvement retail, associates' ability to tell customers where particular products are, and to get them to the product, is a big factor in customer satisfaction. Together with my client, I will now agree that what I'm really trying to do is to impact these further intermediate measures, such as the results on the Cust Sat Survey questions around Associate Help in Finding Products. Likely I will further drill down on this to break it into Knowledge and Skill Components such as (a) Knowledge of Store Layout, (b) Knowledge of Product Categories/Types, (c) Handling of Customer Questions around Product Location.

I then do the most important thing with my client: I get agreement that what they are hiring me for is not to increase Revenue (the end Factor), but rather to provide mechanisms that will impact these Knowledge and Skill Components in a way that improves scores on that particular survey question (all of which are intermediate factors). In fact, when I work with the store managers as part of the intervention, I will make sure that they understand the relationships here and why these intermediate factors are important. Ultimately, the causal relationship we are counting on (sketched in code after the list) is:

  • If we improve Knowledge of Store Layout, Product Categories/Types, and How to Handle Customer Questions,
  • we will improve the customer experience (as exhibited in how they rate us on that question),
  • which will improve customer satisfaction,
  • which will improve revenue.
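
To make the drill-down concrete, here is a minimal sketch of that factor hierarchy as a small data structure, written in TypeScript. This is purely illustrative and not part of any client engagement; the type names and the idea of computing a "targets" list are my own additions.

    // Illustrative only: the Revenue -> Customer Satisfaction -> Knowledge/Skill
    // drill-down from the example above, captured as a simple tree.
    interface Factor {
      name: string;
      kind: "end" | "intermediate" | "knowledge-skill";
      drivers: Factor[];
    }

    const revenue: Factor = {
      name: "Revenue",
      kind: "end",
      drivers: [{
        name: "Cust Sat Survey question: Associate Help in Finding Products",
        kind: "intermediate",
        drivers: [
          { name: "Knowledge of Store Layout", kind: "knowledge-skill", drivers: [] },
          { name: "Knowledge of Product Categories/Types", kind: "knowledge-skill", drivers: [] },
          { name: "Handling Customer Questions around Product Location", kind: "knowledge-skill", drivers: [] },
        ],
      }],
    };

    // The engagement targets the leaves (Knowledge and Skill Components),
    // not the root (Revenue).
    const targets = revenue.drivers.flatMap(d => d.drivers.map(k => k.name));

Writing the chain down this way forces everyone to agree on which level of the tree the learning intervention is actually accountable for.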

Much of what we do in Learning is making sure we understand these Intermediate Factors.

Keywords: eLearning Trends, Informal Learning

Wednesday, May 17, 2006

Tracking Without an LMS

Based on an earlier post - Tools for On-Demand Information - An LMS?, I received a couple of questions around tracking.

Then today, I saw a post on TrDev about tracking without an LMS and thought I should maybe clarify what I often see as the choices around tracking:

a. Click tracking
b. Custom tracking
c. LMS tracking

Click Tracking

In Click Tracking, you rely on looking at logs of what pages have been clicked on and get reports via log file analysis (web analytics) tools such as WebTrends. These tools will tell you:
  • How many users have visited each page (HTML page)
  • When users are visiting
  • How long users stay

This is very standard technology that your IT shop can likely provide for you. If they cannot, then you can do what I've done on this blog: embed SiteMeter on all of your web pages, and it will give you similar kinds of reports. In fact, if you go to the link at the bottom of my blog (you can't see this in the RSS feed), you can see what traffic I get.
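
If you have neither WebTrends nor SiteMeter handy, even a short script run against the raw web server log will give you the page-level counts these tools report. Here is a minimal sketch in TypeScript (Node); the file name and the Apache-style log format are assumptions for the example, not a recommendation over a real analytics tool.

    // Minimal sketch: count hits per page from an Apache-style access log.
    // Example line: 10.0.0.1 - - [16/May/2006:10:00:00 -0500] "GET /course/page1.html HTTP/1.1" 200 1234
    import { readFileSync } from "fs";

    const log = readFileSync("access.log", "utf8");
    const hits = new Map<string, number>();

    for (const line of log.split("\n")) {
      const match = line.match(/"(?:GET|POST) ([^ ]+) HTTP/);
      if (!match) continue;                  // skip blank or malformed lines
      const page = match[1].split("?")[0];   // ignore query strings
      hits.set(page, (hits.get(page) ?? 0) + 1);
    }

    // Pages by number of visits, most visited first.
    [...hits.entries()]
      .sort((a, b) => b[1] - a[1])
      .forEach(([page, count]) => console.log(`${count}\t${page}`));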

What you don't get with Click Tracking (without using some tricks) is the ability to see what any individual user did on the system. Thus, you couldn't tell if John or Sue finished the course. So, you have to answer the question:

Do I need to know if people are completing the course?

If the answer is no, then the other aspect of this solution is to create your course in a way that is easily tracked. Remember that Click Tracking only tells you what page was clicked on. This means that you need separate pages for your course. If you create a single, big Captivate Flash file, you will have no clickstream data. Instead, you need to break the Captivate movies up (which is good practice anyhow) and put a separate Captivate movie on each page.

LMS Tracking

I'm skipping Custom for a second. LMS tracking relies on creating a SCORM or AICC course, which then communicates with the LMS to provide details of score, sections completed, which user it is, etc.
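
For context, the "communication" in a SCORM 1.2 course boils down to a handful of JavaScript calls against an API object that the LMS exposes to the content window. Here is a minimal sketch written as TypeScript; the frame-walking needed to locate the API object and all error handling are omitted, and the score value is made up.

    // Minimal SCORM 1.2 sketch: report which user it is, a score, and completion.
    // Real content must search parent/opener frames to find the API object;
    // here we assume the LMS has put it directly on the window.
    declare const API: {
      LMSInitialize(arg: string): string;
      LMSGetValue(element: string): string;
      LMSSetValue(element: string, value: string): string;
      LMSCommit(arg: string): string;
      LMSFinish(arg: string): string;
    };

    API.LMSInitialize("");
    const learnerId = API.LMSGetValue("cmi.core.student_id");  // which user it is
    API.LMSSetValue("cmi.core.score.raw", "85");               // score
    API.LMSSetValue("cmi.core.lesson_status", "completed");    // completion
    API.LMSCommit("");
    API.LMSFinish("");

AICC courses accomplish essentially the same thing over HTTP form posts rather than a JavaScript API.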

There are two issues with LMS tracking. First, many people do not have an LMS available to them. Second, even if you have an LMS you may not want to require users to login before they access content. This is discussed in Tools for On-Demand Information - An LMS?

Custom Tracking

While it is becoming less common as prices for LMS products have come down and more hosted LMS products have become available, there are still times when we build custom tracking solutions. If you have no IT support available to you (i.e., no one to do even simple programming), then this option is not available. However, there are some very simple things you can do to quickly and easily track your courses. While there are many solutions and lots of possible permutations, the basic approaches are either a Simple Database or an Enhanced Click Stream.

In a Simple Database approach, users are asked to enter their name (sometimes at the start and sometimes at the end) in a simple form, and that entry is recorded in a database. A simple web page is created that dumps out these results. There are lots and lots of subtleties here, but this is very simple to pull together and will give you a record of what a specific individual did. This approach is good when you only need a simple report of who has completed the content and do not need details of how they got there.
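
As a concrete illustration of the Simple Database approach, here is a stripped-down sketch in TypeScript (Node) of a handler that records whatever name the user submits from the course's completion form. The endpoint path, the form field names, and the CSV file standing in for the database table are all invented for the example.

    // Minimal "Simple Database" sketch: a form on the completion page posts the
    // user's name, and we append a row to a CSV file (standing in for a real
    // database table). All names and paths here are illustrative.
    import { createServer } from "http";
    import { appendFileSync } from "fs";
    import { URLSearchParams } from "url";

    createServer((req, res) => {
      if (req.method === "POST" && req.url === "/complete") {
        let body = "";
        req.on("data", chunk => (body += chunk));
        req.on("end", () => {
          const form = new URLSearchParams(body);        // parse the form post
          const name = form.get("name") ?? "unknown";
          const course = form.get("course") ?? "unknown";
          appendFileSync("completions.csv",
            `${new Date().toISOString()},${course},${name}\n`);
          res.end("Thanks - your completion has been recorded.");
        });
      } else {
        res.statusCode = 404;
        res.end();
      }
    }).listen(8080);

The "simple web page that dumps out these results" is then just a page that reads completions.csv back out as a table.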

In Enhanced Clickstream, we continue to rely on a tool like WebTrends, but we put in place a simple bit of code that enhances the clickstream data (the web log file) with information about the particular user. Normally, we rely on asking the user up front who they are (and then embed a cookie for repeat visits). This way, we can encode each page hit in the log file with the user information. WebTrends and other such tools can look at these parameters and give details of what pages that user has gone to. If you have a single completion page, it is easy to get a report on who has "completed" the course. This approach is good when you want more detail about what individuals are doing on the system.
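
The "simple bit of code" here is usually just a client-side script that remembers who the user is and makes sure that identity shows up in each request the web server logs. Here is a hedged sketch in TypeScript; the cookie name (trackUser) and query parameter (user) are invented for the example.

    // Minimal sketch: tag every page hit with the user's name so it lands in the
    // web log as a query parameter that WebTrends-style tools can report on.
    function getOrAskUser(): string {
      const match = document.cookie.match(/(?:^|; )trackUser=([^;]*)/);
      if (match) return decodeURIComponent(match[1]);     // repeat visit: use the cookie
      const name = window.prompt("Please enter your name:") || "anonymous";
      document.cookie = "trackUser=" + encodeURIComponent(name) +
        "; path=/; max-age=" + 60 * 60 * 24 * 365;         // remember for repeat visits
      return name;
    }

    const user = getOrAskUser();
    if (!location.search.includes("user=")) {
      // Reload once with ?user=<name> appended so this hit is logged with the user.
      const sep = location.search ? "&" : "?";
      location.replace(location.pathname + location.search + sep +
        "user=" + encodeURIComponent(user));
    }

With something like this in place, a single completion page shows up in the log as, say, /complete.html?user=Sue, and the "who completed the course" report falls out of the same analytics tool.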



Keywords: eLearning Resources

US, Europe, China Vie for Cultural Influence

For years, the global marketplace has been dominated by American advertising, media  and products.  However, the Herman Group forecasts increased competition from Europe and China on the world stage, and the different cultural influences that such presences would bring:



Watching trends, we look beyond China to other parts of the world under the influence of Western cultures. American and European marketers are deeply invested in extending their reach and penetration. They must now anticipate and prepare for competition from Chinese marketers. There is a new player in the global race for cultural influence, the tremendous consumerism impact, and the more subtle pursuit of political attachment.




The Herman Group also anticipates a surprising level of diversity from the Chinese marketplace, reflecting the array of cultural niches and preferences that marketers in the West are just beginning to understand.




Tuesday, May 16, 2006

Developer Creates a Prototype "Zero Energy Home"

So-called "zero energy homes" have been around for awhile in experimental form.  But now, an Oklahoma-based housing developer has created a prototype that could bring such high-efficiency, environmentally friendly homes into the mainstream housing market.



The builder, Ideal Homes, constructed the house in a suburb of Edmond, Oklahoma.  The house is designed to sell for about $200,000 in a market where an equivalent house using more traditional energy methods would sell for about $125,000.  Ideal Homes is positioning itself to be a leader in new housing construction, perhaps hoping to redefine homebuilding the way that William Levitt did in the 1950s.



Despite its name, a zero energy house does use energy, and is connected to the power grid.  But through solar cells, ground-source heat pumps, tankless water heaters, sophisticated insulation and architecture that leverages sunlight, a zero energy house can actually generate more energy than it consumes. 



The higher cost of the house may outweigh the energy savings in the short term, but the savings will surely increase as the price of fossil fuels rises.  Plus, if zero energy design becomes the standard for housing construction, costs are certain to fall, making zero energy homes that much more attractive.



Source:  Futurismic








Collaborative Learning Using Web 2.0 Tools - A Summary

Background

Over the past six weeks, I’ve been leading a course:

Collaborative Learning Using Web 2.0 / eLearning 2.0 Approaches

Course Description: The purpose of this course is to give you an opportunity to learn about collaborative learning by participating in collaborative learning. This course is designed to teach how to design and build collaborative learning experiences using Web 2.0 / eLearning 2.0 approaches.

You can find out more about the course itself from the Wiki at: http://collaborativelearning.pbwiki.com/

The basic structure of the course was:

  • Six weeks long, each week had a one-hour virtual classroom session
  • First two weeks were introductions to the tools and to collaborative learning
  • Middle three weeks were spent designing collaborative learning projects (as teams) and facilitating/participating in those projects.
  • Final week was a summary discussion

The participants in the class were corporate learning professionals from a variety of medium to large organizations.

Summary

This has been a great learning experience for me. I thought it might be interesting to provide some of the feedback from course participants and some of the insights from having conducted this course. I’m going to likely have additional posts based on the outcomes of this course. (Note: all quotes below are from participants.)

  • Nuvvo was initially going to be used for the course as a registration system, for communications, and for some content presentation. However, I found it added little value over using separate tools such as Yahoo Groups and a PBWiki. Through Nuvvo I did receive several registration requests, so it at least offers free advertising.
  • There is significant interest in the topic. I sent out about 10 emails to friends and colleagues and quickly found about 20 people wanting to take the course (I got back 2 or 3 people at several companies). However, the low barrier to entry (free course offered by a friend) created an opportunity for a disconnect between my expectations for participants and their “commitment.”
“In order for the class to be effective I would be charging up front. Make some accountability for the folks to ensure the committment is there.”
  • An introductory survey, conducted using SurveyMonkey, worked very well! I would highly recommend SurveyMonkey. Using the survey, I was quickly able to determine interest areas, when people would be able to participate, and their level of understanding of different technologies (Survey Results). My only problem was that I didn’t really have a way to change the class design significantly based on the results, and I should have. See the next topic. Interestingly, one of the projects within the class (created by the participants) used a survey as well (which was a surprise, but speaks to how easy surveys are to use these days).

“Use a more comprehensive survey to learn more about the participants and why
they are taking the course and how they plan on using the things they learn. This might provide more insight in developing various exercises or homework - pairing people with like needs.”

  • When I originally conceived the course, I assumed that most attendees would know about Blogs, Wikis, etc. generally, but would not have experience using them. This turned out not to be the case. Because of this I could easily have used another two weeks to more gradually introduce the tools.
"introducing 3 new technologies (blogs, BlogLines, wikis) at the same time is a bit much"
“Specific introductory directions would have been helpful.”
"liked to have started the week with more background and knowledge on blogs and wikis ... so my focus could have been on richer information sharing"
“Have a pre-class experience for those unfamiliar with the tools to learn more about
them by having demonstrations or showing examples. Or dedicate the first 2 sessions to learning and applying the tools one at a time in our homework. In this way, those people who are already familiar with them could be given the choice of missing those sessions. "
  • All professionals (including learning professionals) are extremely busy. Even though the attendees in this class were fairly dedicated, it still is hard to find time for 3-5 hours per week. Ideally the course would have required less research time on topics. It is hard to define “assignments” that take relatively short amounts of time yet are interesting and deep enough. Part of this is the natural reluctance to share partial thinking.
"Doing research on the web is time consuming"
"I find that I am hesitant to write anything unless I have really thought it through."
  • Firewalls and restrictive corporate environments caused us considerable grief: Yahoo Groups was blocked in some corporate environments; Elluminate did not work through several firewalls, so we had to switch to WebEx, which really goofed up the start of the course; and the Yahoo Toolbar (for My Web) couldn’t be installed on locked desktops.
  • Software as a Service (SaaS) is the only way this could have worked. While I came into this believing in SaaS, I’m leaving believing in it even more.

  • Limit class size … teams of 4 worked well … 22 individuals acting individually in the early part of the course did not.

  • I think that we achieved a different kind of understanding around Blogs, Wikis, Discussion Groups, etc. by actively using them as part of the class. Most of the participants gained real value from actively working with the tools as part of a learning experience, BUT there was definite frustration as well, since they had to get up to speed on the tools and learn about using them at the same time. Given the widely different levels of experience and propensity for using the tools, it was very difficult to balance learning about the tools with using them as part of learning. In the future, I will separate the two entirely.

"Let’s hear it for failure based learning"

  • Students got first-hand experience with "control" in collaborative learning, both as participants and as leaders. As participants, they were given a lot of autonomy, which sometimes worked well and many times did not. As leaders of their individual projects, they had mixed results getting participants to actively engage. Also, I was fairly open on many issues such as assigning roles in teams and establishing norms – but some teams suffered because of this:
"Team collaboration might have been better if we had assigned roles soon after Tuesday's meeting"
"I found that in our collaborative environment, it was much easier to get busy and not participate as much as I would have when forced to face my facilitator or team members in an actual conference call or live meeting encounter. It became apparent that deadlines would need to be set and enforced in a corporate collaborative environment to ensure things kept progressing."
  • You still need to enforce timelines, norms, etc. While it’s nice to try to leave things open, it may be more effective to be somewhat dictatorial.
“Meetings are great to keep things moving because they are a deadline”
  • Each week, students would reflect on what they learned and how the class was working for them through a Plus/Delta assignment. This worked very well and sparked interesting discussions each week. But you need a thick skin. Most of the quotes here come from the Plus/Deltas. However, there was some question of the format:

"I am not sold on the value of a plus / delta post. I prefer a more free form comment on the learning of the week. The important element of the post from a learning perspective is learner reflection. I think general comments on the week
allows this reflection to be more meaningful, at least to me it is."

  • The tools generally work pretty well for collaboration
"Our new collaboration tools were essential as our team had challenges with time schedules and difficulties coming together."
"signing up on Yahoo 360 and Yahoo My Web went well"
"sharing links with My Web is a lot easier than e-mailing them"
"Yahoo Bookmarks allows you to see your bookmarks from home or office"
"Yahoo groups worked well for exchanging information with team members"
"Signing up for Yahoo 360 and My Web were very quick and easy"
"I was surprised at how easy add-ins were to incorporate"
  • The projects in the course (designed by participants) were a great learning experience. If anything, more of this kind of learning would have been better.

  • Timing is a real challenge. Ideally you would get people working in pairs or teams quickly, but with this kind of class you expect some level of non-participation, and you don’t want people teamed with folks who will drop. So, you start individually and then move to more team-based work. However, most of the participants felt that pairs/teams were more effective.

  • Our teams had mixed results in really collaborating virtually. This was primarily a function of timing. If you could do several cycles in a short amount of time, then you could get good collaboration.

Keywords: eLearning 2.0, Web 2.0, Collaborative Learning

Will 2006 be a Season of "Niche Blockbuster" Movies?

Those observing the film industry made note of the overall decline in box office sales in 2005.  This year, despite a rise in business, analysts are noting that this summer's slate of major releases does not appear to be generating much excitement among those in the prime movie-going age bracket (18-30 year olds). 



An online survey of young people by the consumer trend forecasting firm Zandl Group concerning the 2006 summer movies found that only three releases (The Da Vinci Code, Pirates of the Caribbean: Dead Man's Chest and X-Men: The Last Stand) generated interest from more than 50% of those surveyed.  Additionally, much has been made of the less-than-spectacular numbers earned thus far by the season's first big release, Mission Impossible III, and the anemic first-weekend earnings of Poseidon, the $160 million remake of the classic 1972 disaster movie The Poseidon Adventure.



The long tail effect, combined with greater online offerings, is creating the phenomenon of the "niche blockbuster."  Unlike the classic Hollywood blockbuster that "everyone" went to see, niche blockbusters can still earn respectable profits, but will appeal to specific audiences.  The 2006 menu of summer movies may well be one that appeals to just such niches.



Sources:  VisibilityPR, Box Office Mojo






Content Reclaiming the Throne

"Content is king!" went the battle cry from the early days of the Web.  A decade later, with the dotcom bust a distant bad memory and Web 2.0 reinvigorating the online world, content is once again the focus of attention.  The attention of some very big and surprising players, it should be noted.



User-generated video is, of course, generating the hottest buzz at the moment.  This buzz has caught the eye of advertisers, eager to find new ways to attract viewers who ignore TV commercials and popup ads.  An Atlanta-based ad agency called ViTrue is developing a model for selling customer-created ads, to "enable major brands to leverage customer creativity to produce more relevant and engaging ads, at lower production and distribution costs." To do this, ViTrue has acquired a controlling interest in the video sharing site Sharkle.



The way ViTrue sees it, everyone wins:  the client gets a low-cost ad that raises its visibility in a hard-to-reach demographic, consumers see something that's both informative and entertaining, and would-be video producers get their work showcased.  The strategy has its risks -- the viral video craze could die down, or viewers could get jaded quickly -- but with the market for online video advertising expected to reach $640 million in the US alone by 2007, it's a risk advertisers have to take.



Companies like ViTrue aren't the only ones embracing content these days.  No less a player than Microsoft has announced its entry into the content creation space.  While hardly a stranger in this area -- Microsoft launched Slate and MSNBC back in the Web 1.0 days, and Bill Gates said in early 1996, "Content is where I expect much of the real money will be made on the Internet" -- Microsoft appears to be once again making content development and advertising key priorities.  Said CEO Steve Ballmer in a May 4 meeting with advertisers, "We think the desire for people to read online and the desire you have to communicate online actually exceeds the good inventory that's available out there today." 



Microsoft's approach appears to center around beefing up its MSN network by, among other things, acquiring UK-based game developer Lionhead Studios and online advertising firm Massive Inc.  Some speculate that Microsoft may also purchase a stake in some of the Internet TV startups, or even in an established media powerhouse like Time Warner.



Sources:  Lost Remote, Springwise, Business Week






The Changing Face of Grocery Shopping

Evolving consumer tastes and business trends are changing the way we shop for groceries.  Once a dull, utilitarian task that stressed low prices over style, grocery shopping is taking on new and even exciting facets.



In the US, perhaps the most notable trend is that toward upscale stores such as Wegman's, Whole Foods and Trader Joe's.  In addition to providing the customer with innovative and exotic products -- many of which are organic and ecologically sensitive -- these stores strive to create a shopping experience, offering cooking classes and other attractions to make the stores special destinations.  Many of these stores also have cafes and eat-in facilities, allowing shoppers to dine where they shop (serving a dual purpose of offering a time-saving service while showcasing the store's foods).  Such moves appear to be paying off, as shoppers often travel out of their way to shop at these stores.



In these stores, enhancements to the customer experience are more than just window dressing.  These stores consider themselves to be on a mission... and the result is that their employees are energized and pass that sense of purpose on to their customers.  Attention to detail, then, is an outgrowth of employee passion.



The trend toward upgrading supermarkets has also taken hold in Europe, where the Austrian chain MPreis is incorporating stunning and progressive architecture into its store construction.  With each store featuring its own eye-catching design, MPreis stores are as far away as one can get from bland "big box" retail construction.



One notable aspect of this trend is that, with all this attention to style and mission, prices in these stores are only slightly higher than those of their discount competitors -- who are taking notice of these stores' success.



The most notable of these, Wal-Mart, is preparing to launch a line of organic foods.  Organic advocates generally applaud this move, as Wal-Mart's power in the retail space can only raise organic products' profile in the mainstream, though some critics dismiss the move as "greenwashing" -- using organics as a marketing ploy while changing little about the company's retail operations.



Upscale grocers are thriving because informed consumers are voting with their wallets, choosing style, service, quality and sustainability over low prices in their grocery shopping, even if that means paying slightly more. 



Sources:  Springwise, WorldChanging


