Thursday, December 30, 2010

Skype for iPhone Now with Video Calls

I can remember back to 1964, at the New York World's Fair, being transfixed as I spoke to my mother and father and was able to see them on the video phone in the AT&T Pavilion. It sure has taken a long time for the idea of video calls to take hold. But if there is one prediction that will really take hold in 2011, it is that we will begin to make more and more video calls on our mobile phones. Apple led the way this year with its FaceTime application running on the iPhone 4 and iPod touch, allowing users to have video calls on the go whenever they are within range of a WiFi network. Skype today announced the release of its free Skype for iPhone app, which allows users to place video calls over a 3G or WiFi network.

With Skype, you can:
  • Make video calls to people on their computers as well as other iPhones (details below)
  • Make free audio calls to anyone else on Skype
  • Make great value calls to landlines and mobiles around the world
The new app is compatible with the iPhone 4, iPhone 3GS, and iPod touch 4th generation running iOS 4.0 or above. You can also receive video calls on the iPod touch 3rd generation and the iPad. Calls can be made between devices using the new Skype for iPhone app and desktops running Skype for Windows 4.2 and above, Skype for Mac 2.8 and above, Skype for Linux, and the ASUS videophone.

So enjoy the New Year and reach out and touch someone with Skype for the iPhone!

Sunday, December 26, 2010

Documenting Students' Work with Xpaper

I have always been a big fan of digital pen technology and have witnessed the growth of this market over the past 8 years with the release of such products as the Livescribe Pulse Smartpen and PaperShow for Teachers. Using a digital pen and digital paper makes these solutions come alive, and they are a natural fit for students and teachers since very little training is required. One area where I believe digital pen and paper solutions could have tremendous impact is in how teachers and school administrators capture data in the schools.

I have been using a product called Xpaper from a company called Talario, LLC for the past couple of years, which lets me print my documents or forms on ordinary plain paper from a color laser printer. When Xpaper prints a document, it lays down a grid of microdots on the page, which makes it ready to write on with a digital pen. In the example above, I used Xpaper to record the errors a student made while reading a text passage. Using the Logitech digital pen, I marked up the reading passage, and once I was done I docked the pen and a crystal-clear PDF file was created for me to archive the document. Using Xpaper, I eliminated the need to scan the document into my computer, and I was quickly able to create a workflow with the digital document. I can also quickly send the PDF document to the cloud and store it on Google Docs if I prefer. Now imagine if teachers and school administrators used this technology to process all of the forms and data collected in the schools. I think you will find that using this technology one could save time and begin to manage, collect, and archive data that is important to the life of the school. If you are interested in learning more about Xpaper and how you could take advantage of this technology, please feel free to email me. To watch an overview of how Xpaper works, click here.

Friday, December 24, 2010

aHa!Visual Web Export for MindManager 9

I find myself spending more time these days creating and putting up information on the web for both the classes that I teach and the workshops that I facilitate. As a result, I am always looking for tools that will make it easier for me to accomplish these goals. I spend a lot of time using various mind mapping tools to brainstorm and communicate the ideas and information that I will cover in my sessions. I now tend to use MindManager 9 to create a lot of my materials and have been looking for a way to quickly output my maps to the web. Several years ago I reviewed aHa!Visual Web Export and found it to be an easy-to-use solution for moving my MindManager maps to the web. Since moving to MindManager 9, I have had a chance to take a look at the aHa!Visual Web Export plug-in, which was recently updated to work with Mindjet MindManager 9.

Installing aHa!Visual Web Export was an extremely simple process. To export your MindManager 9 mind map to the web, you select Export as Web Pages from the File menu, and you will then see an option to export using the aHa!Visual Web Export plug-in. While there are lots of ways to customize how the map will render on the web page, it is easy enough to select the default options and save the output files to a folder on your desktop. Once you save the output files, you can simply upload them to your server to display them. I found the process very easy, and within minutes I was able to view my map on the web. All of your notes and web page links are live when they are exported. You can click on this link to view the output from aHa!Visual Web Export. I would highly recommend aHa!Visual Web Export if you are looking for a fast way to share your MindManager 9 maps on the web.

Thursday, December 16, 2010

Rebit’s Holiday Specials Bring “Ridiculously Simple” Discounts on PC Backup

Rebit has to be one of the easiest ways to back up your PC. I have had the opportunity to use Rebit on my laptop, and it works as advertised. Rebit not only backs up your files but also backs up your applications and system. This is a great gift for the holidays! Brian


Rebit’s Holiday Specials Bring “Ridiculously Simple” Discounts on PC Backup
All Software, 1TB Drive on Sale through New Year’s Eve

LONGMONT, CO –  December 16, 2010 – Rebit Inc., the company dedicated to making backup and recovery for PC users “ridiculously simple,” today announced that it is offering holiday specials through December 31, 2010 on orders placed through www.Rebit.com.

“Our online holiday specials are a great way for users to save on complete and automatic backup and recovery for themselves and their friends, so all of those holiday memories can be safely stored and  treasured for years to come,” commented Charlene Murphy, executive vice president, sales and marketing, Rebit. 

About Rebit’s “Ridiculously Simple” Software
Rebit is the only backup and complete system recovery solution that starts working the minute it is installed, keeping PCs continuously protected from crashes, viruses, or accidental file deletions.  Rebit backup and recovery is available for both Direct Attached Storage (DAS) and Network Attached Storage (NAS), and can be purchased at www.Rebit.com. U.S. computer retailers and resellers can purchase through Rebit authorized distributors D&H (www.dandh.com) and SED (www.sedonline.com).

About Rebit Inc.

Rebit Inc. is a software company committed to delivering fully-automatic and complete PC backup and recovery, removing the burden of managing backup from users.  Rebit was named a 2009 and 2010 CRN Emerging Vendor by Computer Reseller News, and Rebit products have earned the Editor’s Choice Awards from Computer Times and Dragon Steel Mods. Contact Rebit at www.Rebit.com.  Rebit recommends “following the frog” via Twitter (@Rebit_Inc), Facebook (www.Facebook.com/Rebit) and the Frog Blog (www.Rebit.com).


New French Blog / Novo blog em Francês

This article is written in English and Portuguese
Este artigo está escrito em Inglês e Português

English Version:

It's a great pleasure to present you with another Informix blog. This one is written in French. The author is Eric Vercelletto, who was a colleague at Informix Portugal. Eric has a long history with Informix. He was working at Informix France and decided to join Informix Portugal mainly to participate in a big and complex project several years ago (before I joined Informix). After that we met and worked together at another customer. At the time I was working mainly with tools, and he managed all the engine-side work. When he decided to embrace other challenges outside Informix, I assumed his position at that customer. It was a big challenge for me (I had relatively little experience with the engine), and Eric was a great help. I still use some of his scripts today, and I learned many things from him.
But the world never stops spinning and currently Eric is back on Informix, and he's enthusiastic about it. I wish him all the best and I really hope he is able to share some of his knowledge about Informix with the community.
He decided to write the blog in French, since French people like to take care of their language. This is great news for the French community. As for us non-French speakers, we can try our best to understand it. It would be interesting to see it in English as well... (just a challenge, Eric ;) ). But for now, the important thing is to keep a steady rate of articles. And I can assure you that's hard. Welcome Eric!

The blog address is:

http://levillageinformix.blogspot.com/

(something like "the Informix village")



Versão Portuguesa:

É um grande prazer poder apresentar-vos um novo blog Informix. Desta feita escrito em Francês. O autor é Eric Vercelletto, que foi um colega da Informix Portugal. O Eric tem um longo passado com Informix. Estava a trabalhar na Informix França e decidiu juntar-se à Informix Portugal, principalmente para participar num projecto grande e complexo há vários anos atrás (antes de eu ingressar na Informix Portugal). Após isso conhecemo-nos e trabalhámos juntos num outro cliente. Na altura eu trabalhava essencialmente com ferramentas e ele geria o lado do motor.
Quando ele decidiu abraçar outros desafios fora da Informix, assumi a sua posição no cliente. Foi um grande desafio para mim (tinha muito pouca experiência com o motor) e o Eric foi uma grande ajuda. Ainda utilizo alguns dos seus scripts hoje, e aprendi muitas coisas com ele.
Mas o mundo dá voltas e mais voltas e actualmente o Eric está de volta ao Informix, e continua entusiasta. Desejo-lhe tudo de bom e espero sinceramente que ele consiga partilhar algum do seu conhecimento Informix com a comunidade.
Ele decidiu escrever o blog em Francês porque os Franceses gostam de cuidar da sua língua. Isto são excelentes notícias para a comunidade Francófona. Quanto a nós, que não dominamos a língua, tentaremos o nosso melhor para o perceber. Era interessante ver o conteúdo também em Inglês (só um desafio Eric... :) ). Mas por agora, o importante é manter um ritmo constante de novos artigos. E posso assegurar que não é fácil. Bem-vindo Eric!


O endereço do blog é:

http://levillageinformix.blogspot.com/

(algo como "a aldeia do Informix", o que vindo da Gália, traz boas recordações de criança)

Wednesday, December 15, 2010

Informix ROI webcast

This article is written in English and Portuguese
Este artigo está escrito em Português e Inglês

English version:

Following the recent announcement of a Forrester study about Informix ROI, a webcast was held on December 13. The replay can be seen here:

https://www.techwebonlineevents.com/ars/eventregistration.do?mode=eventreg&F=1002717&K=4ON

You can listen to it in webcast format and also download the slides and sound file.
The presentation was given by Jon Erickson from Forrester and Richard Wozniak, who walks through some of the key Panther features.
Be sure to pass this to your company management!



Versão Portuguesa:

No seguimento do recente anúncio sobre um estudo da Forrester sobre o ROI (return on investment) do Informix, foi apresentado um webcast no dia 13 de Dezembro. Pode rever esta apresentação aqui:

https://www.techwebonlineevents.com/ars/eventregistration.do?mode=eventreg&F=1002717&K=4ON

Pode ouvir/ver em formato webcast e também fazer o download dos ficheiros com os slides e o som.
A apresentação foi feita por Jon Erickson da Forrester e Richard Wozniak, que abordou algumas das principais funcionalidades da versão Panther (11.7).
Não deixe de divulgar esta informação aos gestores da sua organização!

Tuesday, December 14, 2010

Seven Things I Learned This Year

Over the past few years, I have spent part of December going back through my blog to recap some of the key things I've learned over the course of the year.  I've been doing this for a few years now, for example: Learned about Learning in 2009.  And every year I use this as a Big Question – see: Learning 2010.  A lot of it is thinking through where my thinking has changed over the course of the year.  So here are a few of the things that are a bit different for me.

1. Twitter is Much Better than I Thought for Learning

I used to say during presentations that I wasn't quite sure about Twitter as a learning tool.  During 2010, I've been ramping up my use of Twitter as a learning tool.  I've had to find ways to filter the flow and figure out when/how to reach out.  It was definitely helpful to spend time going through Twitter for Learning – 55 Great Articles.

2. Learning Coach Model Very Powerful

In 2010, I had a great experience where Dr. Joel Harband wrote a series of articles for my blog on Text-to-Speech in eLearning.   Here’s the series:

But what I learned from this was that it was a fantastic way to learn about a topic where I was interested but didn’t have the time to spend researching it.  Instead, Joel would write it up.  I’d ask questions and edit it.

It provided high value for me and hopefully value for people reading it.

I’m looking forward to doing more of this going forward.  Please let me know if you want to be a Learning Coach for me on another topic.

3. iPad (and iPhone) are Much More Useful Than I Expected

I didn’t actually think that I would care about the iPad except as a tool for training and performance support in environments like retail and restaurants where it’s always been an issue having access to machines.  However, now that I have an iPad myself, I’ve found myself sitting on the couch with it a LOT.  And slowly it’s got me to try more applications and then those applications expand off to my iPhone.

It’s an amazing device and no surprise it was one of the breakout topics on eLearning Learning this year.

4. LMS and Learning Tracking Still Struggling

While LMS solutions continue to get better, more powerful, and more diverse, I continue to find myself searching for just the right solution for particular needs.  For example, my search for an LMS Solution for Simple Partner Compliance Training didn't really arrive at just the right solution.  I was also struggling for clients who needed very simple learning tracking but with some customizations.  Marketplace LMS solutions don't quite fit.  Neither do more complex solutions.

And a big part of the problem is just how many there are and how fast they change.

5. Aggregation and Social Filtering Provide High Value

eLearning Learning has somewhat become my singular source of great eLearning content.  I use it to filter and find the best content on a daily, weekly, and monthly basis.  And it's going to become much better in the new year as it moves over to the next-generation platform.  I was really glad to see it grow to become one of the Top eLearning Sites.  And the system itself is growing with sites like Social Media Informer.

6. Open Content Has Potential, But There Are Challenges

This year I spent quite a bit of time looking at where and how open content could get leveraged in different ways.  I’m still struggling a little bit with it, but I know there’s going to be a lot going on around it.  See Failure of Creative Commons Licenses and Creative Commons Use in For-Profit Company eLearning? for more on this.

7. Flash may Die and HTML 5 is Going to be Big

2010 opened my eyes about Flash and HTML5.  I really think that 2010 marks the Beginning of Long Slow Death of Flash.  This, of course, means some really big changes for authoring tools in the industry.

Top Topics and Posts

As part of this exercise, I went back to look at my top posts and hottest topics for the year via eLearning Learning.  What I wrote more about in 2010 than past years:

And here were my top posts based on social signals.

  1. Twitter for Learning – 55 Great Articles
  2. Wikis and Learning – 60 Resources
  3. Teaching Online Courses – 60 Great Resources
  4. Top 10 eLearning Predictions for 2010
  5. Top 35 Articles on eLearning Strategy
  6. Open Source eLearning Tools
  7. 19 Tips for Effective Online Conferences
  8. Effective Web Conferences – 41 Resources
  9. Augmented Reality for Learning
  10. eLearning Conferences 2011
  11. Creative Commons Use in For-Profit Company eLearning?
  12. Top eLearning Sites?
  13. Social Learning Tools Should Not be Separate from Enterprise 2.0
  14. Social Media for Knowledge Workers
  15. Low-Cost Test and Quiz Tool Comparison
  16. Using Text-to-Speech in an eLearning Course
  17. Text-to-Speech Overview and NLP Quality
  18. SharePoint Social Learning Experience
  19. Beginning of Long Slow Death of Flash
  20. Text-to-Speech vs Human Narration for eLearning
  21. eLearning Innovation 2010 – Top 30
  22. Future of Virtual 3D Environments for Learning
  23. Failure of Creative Commons Licenses
  24. Text-to-Speech eLearning Tools - Integrated Products
  25. Success Formula for Discussion Forums in Financial Services
  26. Ning Alternatives that Require Little to No Work?
  27. Performance Support in 2015
  28. What Makes an LMS Easy to Use?
  29. Selling Social Learning – Be a Jack
  30. Evaluating Knowledge Workers
  31. Learning Flash
  32. LMS Solution for Simple Partner Compliance Training
  33. Filtering, Crowdsourcing and Information Overload
  34. Best Lecture
  35. Text-to-Speech Examples
  36. Sales eLearning – 21 Great Resources
  37. Simulations Games Social and Trends
  38. SharePoint Templates for Academic Departments
  39. Virtual Presentation – Ten eLearning Predictions for 2010
  40. Information Filtering

Sunday, December 12, 2010

Panther: Name service cache

This article is written in English and Portuguese
Este artigo está escrito em Inglês e Português

English version:

A recent thread on the IIUG mailing list (related to a reverse DNS issue) reminded me of a new Panther (version 11.7) feature that was on my list of article topics. I've been avoiding many of the bigger and more important features because they will take a lot of time to write about... I hope this one will be shorter.

Informix needs to read several files or interact with DNS servers each time you open a connection. Considering Unix and Linux (Windows is a bit different technically, but not that much conceptually), these are some of the actions the engine must take:
  1. Depending on your host resolution criteria it will probably open the /etc/hosts file to search for your client's IP address. If it's not there it will contact your DNS server in order to request the name associated with the IP address.
    Note that all this is done by a system call.
  2. It will access /etc/passwd (or equivalent) to get your user details (home directory, user ID, group ID, and the password, which is probably stored in another file such as /etc/shadow, etc.)
The engine must also access /etc/services and /etc/group in other situations.
Depending on your environment, these activities can take a bit of time and require significant CPU usage. On systems with a high number of new connections per second, this can naturally become a significant issue.
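To make these lookups concrete, here is a small Python sketch (an illustration only, not Informix code; the function name and structure are my own) of the kinds of name-service calls a server process performs for each incoming connection on a Unix-like system:

```python
import socket  # host and service resolution (files or DNS, per nsswitch.conf)
import pwd     # user lookups (/etc/passwd, /etc/shadow, LDAP, ...)
import grp     # group lookups (/etc/group)

def lookups_for_connection(client_ip, username):
    """Rough equivalents of the per-connection lookups described above.

    Each call may read a local file (/etc/hosts, /etc/passwd, /etc/group)
    or go out to a DNS/LDAP server, depending on system configuration.
    """
    info = {}
    try:
        # Step 1: reverse lookup of the client's IP address
        info["hostname"] = socket.gethostbyaddr(client_ip)[0]
    except OSError:
        info["hostname"] = client_ip  # no reverse record available
    # Step 2: user details from the OS
    user = pwd.getpwnam(username)
    info["home"], info["uid"] = user.pw_dir, user.pw_uid
    # Group information
    info["group"] = grp.getgrgid(user.pw_gid).gr_name
    return info
```

Each of these calls can touch a file or the network, and that repeated per-connection cost is exactly what NS_CACHE is designed to amortize.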
To give you an example, I regularly work on a system that used to receive a very large number of requests from CGI programs. Each request received over HTTP required a new process on the application server and a new connection on the database server. They had peaks of tens of requests per second. Currently they're using FastCGI, with noticeable improvements.
Anyway, IBM decided to give us the chance to optimize this by caching previous results (file searches and DNS requests). This is done with a new parameter called NS_CACHE (from Name Service Cache). The format of this $ONCONFIG parameter is:

host=num_secs,services=num_secs,user=num_secs,group=num_secs

Each comma-separated pair (functionality=num_secs) configures the number of seconds that the engine will cache a query result for that functionality. I'm calling it a functionality because it can be implemented through files or system APIs. The documentation could be clearer, but let's check each one:
  • host
    This is the host and IP address resolution service. Depending on your system configuration (on most Unixes and Linux this is specified in /etc/nsswitch.conf) it can be resolved by reading the /etc/hosts file and/or making a request to your DNS servers
  • service
    This should be the map between service names and system ports, usually done by reading /etc/services. The only situation that comes to my mind where this is used is when you're trying to start a listener (either during engine startup or after that with onmode -P) or when you're trying to make a distributed query to another engine, and you use names in your INFORMIXSQLHOSTS instead of port numbers. In any case, I may be missing something...
  • user
    This is very important. It refers to all the user related info that Informix gathers from the OS and that is relevant to Informix. The information can be stored in /etc/passwd, /etc/shadow, or indirectly be managed by external services like LDAP. It can include:
    - Home dir
    - User ID
    - Group ID
    - Password
    - User status (enabled or disabled)
  • group
    This relates to the OS group information. Usually done by reading /etc/group
If the specified number of seconds to cache the information is zero, it means we don't want to cache it, so the behavior is the old engine behavior: the information must be obtained for each relevant request.
The parameter can be changed online with onmode -wm.

It's important that you fully understand the implications of caching this kind of information. By asking Informix to cache this info, we're also assuming the risk of working with stale information. Let's imagine a simple scenario. Assume this sequence of events:
  1. At time T0 you connect with a user name and password to an engine that is set up to cache user information for 600s (10 minutes).
  2. At time T1 you change that user's password
  3. At time T2, the same user tries to connect to the Informix database with the new password. It will fail!
  4. At time T3 (T0 + the number of seconds to cache user information) the user repeats the connection attempt with the new password. It will succeed!
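The sequence above can be sketched with a small time-to-live (TTL) cache in Python. This is only an illustration of the semantics (the class and names here are hypothetical, not Informix internals): a looked-up value is reused for ttl seconds, and a ttl of zero sends every request back to the source.

```python
import time

class TTLCache:
    """NS_CACHE-style semantics: a looked-up value is reused for ttl
    seconds; ttl=0 means every request hits the backend again."""

    def __init__(self, lookup, ttl, clock=time.monotonic):
        self.lookup, self.ttl, self.clock = lookup, ttl, clock
        self._store = {}  # key -> (value, expiry time)

    def get(self, key):
        if self.ttl > 0:
            hit = self._store.get(key)
            if hit and hit[1] > self.clock():
                return hit[0]          # cached (possibly stale) answer
        value = self.lookup(key)       # e.g. read /etc/passwd or query DNS
        self._store[key] = (value, self.clock() + self.ttl)
        return value

# The stale-password scenario from the text, with a fake clock:
now = [0.0]
passwords = {"ana": "old"}
cache = TTLCache(lambda u: passwords[u], ttl=600, clock=lambda: now[0])

cache.get("ana")            # T0: "old" is fetched and cached
passwords["ana"] = "new"    # T1: password changed at the OS level
stale = cache.get("ana")    # T2: still "old", so the new password fails
now[0] += 600               # T3: the cache entry expires
fresh = cache.get("ana")    # "new" is fetched again and succeeds
```

Running this, `stale` still holds the old value at T2 even though the source has changed, which is exactly the window of stale information the scenario describes.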
How can you avoid situation 3? Changing the cache timeout to 0 works as a flush.
If, for example, you make changes to your users' information, you can run:


onmode -wm NS_CACHE="host=900,service=900,user=0,group=900"
onmode -wm NS_CACHE="host=900,service=900,user=900,group=900"


These commands will flush the user information cache, and then reactivate it.

So, the point I'd like to make is that this feature can help you improve performance (especially on systems with a high connection rate), but it can have side effects. You can work around them, but to do that you must know they exist.


Versão Portuguesa:

Uma discussão recente na lista de correio do IIUG (relativa a um problema com reverse DNS) lembrou-me de uma funcionalidade nova do Panther (versão 11.7) que estava na minha lista de temas a abordar. Tenho andado a evitar muitas das maiores e mais importantes novidades porque vou demorar bastante tempo a escrever sobre elas... Espero que esta seja mais reduzida.

O Informix tem de ler diversos ficheiros ou interagir com servidores de nomes (DNS) cada vez que abre uma conexão. Considerando o Unix e Linux (em Windows será um pouco diferente tecnicamente, mas não muito conceptualmente), estas são as acções que o motor tem de fazer durante o estabelecimento de uma conexão:

  1. Dependendo do critério usado para resolver endereços e nomes, provavelmente irá abrir o ficheiro /etc/hosts para procurar o IP da conexão. Se não o encontrar irá provavelmente contactar o servidor de nomes (DNS) e pedir o nome associado ao IP de onde chega a conexão.
    Note-se que isto é feito com uma chamada de sistema e não cabe ao Informix definir os critérios.
  2. Irá aceder ao /etc/passwd (ou equivalente) para obter os dados do utilizador (HOME dir, password - isto deve estar guardado noutro ficheiro como o /etc/shadow -, id de utilizador, id de grupo etc.)
O motor também tem de aceder ao /etc/services e /etc/group noutras situações.
Dependendo do seu ambiente estas operações podem demorar um pouco e requerer um consumo de CPU relevante. Existem sistemas com muitas conexões novas por segundo o que naturalmente pode transformar isto num problema sério.
Para dar um exemplo, trabalho regularmente com um sistema que em dada altura recebia um enorme número de pedidos por CGI. Sendo CGI, cada pedido recebido via HTTP requeria um novo processo na máquina do servidor aplicacional, e uma nova conexão na base de dados. Tinham picos de dezenas de ligações por segundo. Actualmente estão a usar Fast CGIs com benefício notórios.
De qualquer forma a IBM decidiu dar aos utilizadores a oportunidade de optimizarem estes aspectos, através de uma cache que guarda respostas anteriores (pesquisas em ficheiros e resultados de DNS). Isto é feito com um novo parâmetro designado NS_CACHE (de Name Service Cache). O formato do parâmetro do $ONCONFIG é:

host=num_segs,services=num_segs,user=num_segs,group=num_segs

Cada par (funcionalidade=num_segs) separado por vírgula, configura o número de segundos durante os quais o motor irá manter em cache o resultado de uma pesquisa para essa funcionalidade. Estou a chamar-lhe "funcionalidade", porque pode ser implementada usando ficheiros ou APIs de sistema. A documentação deveria ser mais clara, mas vamos ver cada uma:
  • host
    O serviço de resolução de nomes e endereços IP. Conforme a configuração do seu sistema (na maioria dos Unixes e Linux isto é definido em /etc/nsswitch.conf) pode ser resolvido pelo ficheiro /etc/hosts ou fazendo um pedido aos servidores de DNS
  • service
    Este é o mapeamento entre o nome de serviços e as portas de sistema, habitualmente feito através da leitura do ficheiro /etc/services. As únicas situações que me ocorrem em que isto é usado é quando arrancamos com um listener (seja no arranque do motor ou depois quando se usa o onmode -P), ou quando tentamos executar uma query distribuída a outro motor, e usamos nomes no nosso INFORMIXSQLHOSTS em vez de números de portos. Mas pode estar a escapar-me alguma coisa, e haver outras...
  • user
    Este é muito importante. Refere-se a toda a informação relativa aos utilizadores que o Informix obtém do sistema operativo e que é relevante para o Informix. A informação é guardada no /etc/passwd e /etc/shadow, ou gerida indirectamente em serviços externos como LDAP. Pode incluir:
    - Home dir
    - ID de utilizador
    - ID de grupo
    - Palavra passe
    - Estado do utilizador (activo, inactivo)
  • group
    Isto diz respeito à informação de grupos do sistema operativo. Normalmente feito por consulta ao ficheiro /etc/group

Se o número de segundos especificado para a cache for zero, significa que não queremos fazer caching. Portanto o comportamento será o antigo do motor (para cada pedido a informação tem de ser obtida).

O parâmetro pode ser modificado online com o comando onmode -wm

É importante que entenda completamente todas as implicações de fazer caching deste tipo de informação. Ao pedir ao Informix que guarde e reutilize a informação já obtida, estamos também a assumir o risco de trabalhar com informação entretanto desactualizada. Vamos imaginar um cenário simples. Consideremos a seguinte sequência de eventos:

  1. No momento T0 conectamo-nos usando um utilizador e palavra chave a um motor configurado para efectuar caching por 600 segundos (10 minutos).
  2. No momento T1 mudamos a palavra chave desse mesmo utilizador.
  3. No momento T2 o mesmo utilizador tenta conectar-se ao Informix usando a nova palavra chave. Vai falhar!
  4. No momento T3 (T0+ o número de segundos configurado para a cache de utilizador) o utilizador repete a tentativa de acesso com a nova palavra chave. Vai ter sucesso!
Como pode evitar a situação do ponto 3? Se mudar o tempo de cache para 0, funciona como uma limpeza da cache.
Se por exemplo efectuar mudanças na informação dos utilizadores, pode executar:


onmode -wm NS_CACHE="host=900,service=900,user=0,group=900"
onmode -wm NS_CACHE="host=900,service=900,user=900,group=900"


Estes comandos fazem a limpeza da informação e depois re-activam a cache.

Portanto, o ponto que gostaria de frisar é que esta funcionalidade pode melhorar o desempenho (especialmente em sistemas com elevada frequência de novas conexões), mas também pode ter efeitos secundários. Estes podem ser contornados, mas para isso temos de saber que existem.

Saturday, December 11, 2010

Creating Language Arts Lessons with PaperShow for Teachers

As more and more teachers have begun to use PaperShow for Teachers in the classroom, I wanted to share this tip for creating quick grammar and cloze-technique activities. Using the interactive paper that comes with PaperShow for Teachers gives you the freedom to create activities that your students can interact with. Once you create the activity and print it out on the interactive paper, you can pass the activity out to your students and have them complete it from their desks so that everyone can see. Likewise, you could use it to model how to complete the activity so that everyone can see how it is done. So let's get started!

I have found that the trick to creating these activities is using PowerPoint. So open PowerPoint and create one slide for each of your activities. In the screen shot below you will see that I created a slide that the students could use to correct the grammar.



It is probably a good idea to select a simple PowerPoint style that has a white background, for two reasons: first, it will use less ink, and second, it will be easier for your students to see the text when it is printed. Likewise, you will want to select a larger font so that it will be easier for your students to write on the slide once it is printed on the interactive paper. For the second activity, I created a cloze activity from the first stave of A Christmas Carol. I simply pasted the text into my PowerPoint slide, then removed selected words and used underscores to create the gaps. I then pasted each of the words I cut out at the bottom of the slide.

Now that my activity is completed, I can save my PowerPoint file, import the slide deck into the PaperShow for Teachers application that is on the USB key, and print it on the PaperShow for Teachers interactive paper. PaperShow for Teachers will prompt you to print the slides on the interactive paper, so make sure that you have the paper loaded in your color printer before clicking the OK button. Just a tip: it is a good idea to place the printer sticker that you received with the PaperShow for Teachers Starter Kit on the printer you will be using, to help remind you how to orient the interactive paper.

Here is a quick screencast of how to import your activity into the PaperShow for Teachers application.
Click on this link to see a video on how to import your PowerPoint Slides into PaperShow for Teachers

Saturday, December 4, 2010

Sharing Ideas with PaperShow for Teachers in the Classroom

As much as interactive whiteboards pervade the classroom landscape, it is amazing what you can do with PaperShow for Teachers to get students involved and engaged in classroom instruction. PaperShow for Teachers provides a great vehicle for students to actively record their ideas as a part of a classroom discussion. PaperShow for Teachers has a 30-foot range from the computer, which is ideal for walking around the classroom and having students make contributions to the lesson. Students will feel right at home using the digital pen and paper notebook to capture their ideas so that their classmates can see them. Teachers will find it easy to pass the notebook and pen around the classroom and give students the opportunity to contribute to the discussion. Once students have made their contributions, the notebook file can be saved in PDF format, emailed, or placed on the school's website to share.

PaperShow for Teachers makes a great tool when students are brainstorming or storyboarding ideas. Consider setting up a learning center with a laptop, an LCD projector, and PaperShow for Teachers. Students can then use PaperShow for Teachers to mind map their ideas or brainstorm while working in a small group. There is no need to complete the entire session during one class: just save the file you are working on to the PaperShow for Teachers USB key, and you can revisit it the next time the class meets.

While many PaperShow for Teachers users use it to deliver more engaging presentations, it is also an ideal tool for actively engaging students when they need to analyze or annotate images, diagrams, or pictures. Using the PaperShow for Teachers application that resides on the USB key, you can quickly bring in a series of images, print them on the 8 1/2 x 11 interactive paper using a color printer, and be ready to mark them up with the digital pen. Simply place the images you would like to use in a folder, and PaperShow for Teachers will dutifully import them and get them ready to print on the interactive paper. Once the images are printed, you can use all of the PaperShow for Teachers tools to annotate and mark them up. This feature gives teachers the flexibility to bring in whatever pictures they need for a particular lesson. Once you have done it, you will see just how easy it is to accomplish and how much fun it is to use in the classroom. So make your lessons more interactive by printing images with PaperShow for Teachers, then pass the binder with the images around the classroom to engage the students. Students can identify features in a picture, comment on it, add additional ideas, and then save the results if desired.

At just $249, PaperShow for Teachers gives you tremendous value and a way to engage students in the classroom by providing them more opportunities to participate and interact with ideas and images. Contact me if you would like a free demonstration of PaperShow for Teachers.

Tuesday, November 30, 2010

Best Lecture

I just read George Siemens' post Will online lectures destroy universities?  He makes the point that, despite articles like Why free online lectures will destroy universities – unless they get their act together fast:

Statements like “universities are obsolete” or “universities are dying” are comical. And untrue. Universities are continuing to grow in enrolment and general influence in society. Calling universities obsolete while we are early on in the so-called knowledge economy is like declaring factories obsolete in the 18th century just as the industrial revolution was taking hold. Utter nonsense.

While George does talk about challenges in education, I think he misses part of the point of the article.  And this is something that I’ve been thinking (and writing – see Physics Lectures) about for a long time.  Here’s the point:

  • It’s incredibly easy to capture and distribute lectures.
  • Rather than getting a lecture from whoever is teaching your course locally, wouldn't it be better to get a world-class lecture?

As the article points out:

At the same time, millions of learners around the world are watching world-class lectures online about every subject imaginable, from fractional reserve banking to moral philosophy to pharmacology, supplied by Harvard, MIT, and The Open University.

Have you seen Planet Earth or watched a Professor Lewin physics presentation?  There's basically no way to compete with those sources.  And wouldn't it be better to get the best available lecture, combined with local discussion, studying, testing, etc.?

I’m not quite sure that I buy the article’s contention that:

The simple fact is that university lectures never worked that well in the first place – it’s just that for centuries, we didn’t have any better option for transmitting information. In fact, the success of top universities, both now and historically, is in spite of lectures, not because of it.

Maybe that's because I've learned a lot in schools that way.  But even if you keep lectures, opening everyone up to the Best Lecture available on a given topic has profound implications for education. 

Best Lectures in Corporate Training

I also believe the implications here are profound for corporate training.  We can continue to hide behind the myth that our content is special and different.  Some of the time, that’s quite true.  But there’s a lot of content (leadership, management, safety, etc.) that really should not be replicated by every organization.

Instead, we should be looking for the Best Lecture and work our specifics around that.  Of course, that’s sometimes made harder because despite the Open Content movement in education, there’s less of a movement in corporate learning (and some barriers: Open Content in Workplace Learning?, Creative Commons Use in For-Profit Company eLearning?).

What's also interesting about this situation is that there are similar barriers from the content creator's standpoint.  In the Business of Learning, I talk about the challenges as a content creator and how the business models might work.  And every day I'm talking with people who have great content and could be creating the Best Lecture on a topic.  While they can easily capture it, getting it distributed in a way that pays is difficult.  Instead, they need to package it into a self-contained unit – something similar to an LMS, or a course that runs in a corporate LMS.  But it certainly won't look like a Best Lecture model, with corporate eLearning professionals providing the local discussion, studying, and testing.

I’m not sure what any of this will look like in education or in corporate training – but I am sure it will be quite different in 20 years from how we do it today.

Monday, November 29, 2010

New and Improved Adobe Acrobat X

Adobe Acrobat remains one of the best kept secrets in the software industry. While many users rely on the ubiquitous Adobe Acrobat Reader to open, print, and display files on the web, many are unaware of the engine that makes this all happen. I have been fortunate enough to be provided with a reviewer copy of Adobe Acrobat X Pro, which was recently released into the marketplace. As a long-time user of Adobe Acrobat, I was looking forward to working with the latest version to see what new and innovative features were built into it. Adobe Acrobat X is now available for both Windows and Macintosh computers and builds on the long tradition of Acrobat as an easy-to-use tool for creating and publishing PDF files.

The most significant change you will see when you start up Adobe Acrobat X is the interface. If you have used any of Adobe's newer applications, you will feel right at home. Adobe has really done its homework, analyzing how users are most likely to use Acrobat and reconfiguring the menus. On the right-hand side of the screen you will notice three tabs: Tools, Comments, and Share. Clicking on any of these reveals a pane with the associated tools and features. As a long-time user of Acrobat, I always found it a challenge to locate the tool I was looking for; the new interface makes it a cinch to know exactly where to find things. It is very intuitive and helps you work as efficiently as possible. The simplicity of the interface is going to be a hallmark of this version and one that I know I will enjoy using.

Creating PDF files is a lot easier with the new version of Adobe Acrobat X. Simply select the Create button from the menu and choose how you would like to create your PDF. One area that has been vastly improved is creating a PDF from a scanner. I found that Adobe Acrobat X was much faster at creating the PDF, and the finished file size was much smaller than in the past. There were significant improvements in the optical character recognition engine, which would account for better recognition of scanned material. Having a fully searchable PDF document with a small footprint really fits the bill for me.

One of my favorite features, introduced in version 9 of Acrobat, is the PDF Portfolio. This is an extremely powerful tool, and one that I feel has the potential to take this product far in both the business and education sectors. As the term implies, a PDF Portfolio is a way to include a range of different file types and media formats and wrap them up in a PDF envelope. With a PDF Portfolio you can take Word, Excel, audio, video, and PDF documents and convert them into one single PDF file that can be delivered to your client or student. You can package and brand the portfolio with your company's colors or logo. Your recipients then receive a highly stylized PDF Portfolio with easy-to-use navigation that can be opened with the free Adobe Acrobat Reader (version 9 or X), with the files presented in the order you would like. It is perfect for a business or educational portfolio that displays a range of different content and media. Adding video and Flash content is easier than ever and allows you to bring your documents to life, with video playing inside your PDF Portfolios.

Sending your PDF documents just got a lot easier with the advent of the new Adobe service called Adobe SendNow Online. Adobe SendNow Online is now integrated within Adobe Acrobat X and can be accessed from the Share tab. As you can tell from the name, it stores your files in the cloud and provides a link to your PDF that you can email to your recipients right within Adobe Acrobat X. If you have ever had the problem of sharing large PDF files via email, you will really like how Adobe handles this. Simply enter the recipient's email address and they will receive a link to download the file. It is really that simple, and you can control how much time they have before the link expires and receive delivery receipts when the file is downloaded. The integration of Adobe SendNow Online with Adobe Acrobat X is seamless, and you will ask yourself how you ever lived without it.

Working with Adobe Acrobat as much as I do, I am extremely pleased with this upgrade and the thought that went into making it easier and more intuitive to use. Right out of the box you will find Adobe Acrobat X a pleasure to work with. With a little time you will find that it is one of those must-have applications that you will turn to for all of your creative needs.

PS: Look for another post on the Action Wizard and Forms coming soon

Saturday, November 27, 2010

Panther: Extending extents / Estendendo os extents


English version:

Back to Panther... Although I'm not in the video on the right, I do love Informix. That doesn't mean I ignore some issues it has (or had, in this particular case). One thing that always worries an Informix DBA is the number of extents in his tables. Why? Because it used to have a maximum, and because that maximum was pretty low (compared with other databases). But what is an extent? In short, an extent is a sequence of contiguous pages (or blocks) that belong to a specific table or table partition. A partition (a table has one or more partitions) has one or more extents.
Before I go on, and to give you some comparison I'd like to tell you about some feedback I had over the years from a DBA (mostly Oracle, but who also managed an Informix instance):
  • A few years ago he spent lots of time running processes to reduce the number of extents in a competitor's database that had reached an incredible number of extents (around hundreds of thousands). This was caused by really large tables and really badly defined storage allocation
  • Recently I informed the same customer that Informix was eliminating the extent limits, and the same DBA told me he was afraid it could lead to situations like the above. He added, and I quote, "I've always admired the way Informix deals with extents"
So, if a customer DBA is telling me that he admires the way we used to handle extents, and he's afraid of this latest change, why am I complaining about the past? As in many other situations, things aren't exactly black and white... Let's see what Informix has always done well:

  1. After 16 allocations of new extents, Informix automatically doubles the size of the next extent for the table. This decreases the number of times it will try to allocate a new extent. Using only this rule (which is not quite accurate, as we shall see), if you create a table with a next size of 16K, you would reach 4GB with around 225 extents.
  2. If Informix can allocate a new extent contiguous to an already existing one (from the same table, of course), then it will not create a new one, but will instead extend the one that already exists (so it does not increase the number of extents). This is one reason why rule number one may not be seen in practice. In other words, it's more than probable that you can reach 4GB with fewer than 225 extents.
  3. In version 11.50, if I'm correct, a fix was implemented to prevent excessive extent doubling (rule 1). If the next extent size is X and the dbspace only has a maximum of Y contiguous free space (Y < X), Informix will allocate Y and will not raise any error.
    If this happens many times, we could end up with a relatively small number of allocated pages but a next extent size that is too big. There's a real problem in this: if in these circumstances we create another chunk in the same dbspace, and after that our table requires another extent, the engine could reserve a large part of the new (and possibly still empty) chunk for our table. This can be much more than the size already allocated to the table. To avoid this, extent doubling only happens when there is a reasonable relation between the newly calculated next extent size and the space effectively allocated to the table.
  4. Extent descriptions in Informix have never been stored in the database catalog. This leads to simpler and more efficient space management. Databases that used to do this management in the catalog tended to hit issues in the SQL layer and were slower at the same time. One of our competitors changed that in its later versions, and DBAs saw improvements (but they had to convert). Informix has always worked the better way...
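The extent-doubling arithmetic in rule 1 can be checked with a short simulation. This is a sketch of the worst case only: it assumes no extent concatenation (rule 2) and none of the 11.50 damping (rule 3), with a first extent of 16KB.

```python
# Simulate rule 1 above: after every 16 extent allocations,
# the next-extent size for the table doubles.
# Worst case only: rules 2 and 3 are deliberately ignored.

KB = 1
GB = 1024 * 1024 * KB

def extents_to_reach(target_kb, first_extent_kb=16):
    size = first_extent_kb
    total = 0
    count = 0
    while total < target_kb:
        if count > 0 and count % 16 == 0:
            size *= 2          # doubling kicks in after each group of 16
        total += size
        count += 1
    return count

print(extents_to_reach(4 * GB))  # 225
```

This reproduces the "around 225 extents for 4GB" figure from the text; with rule 2 concatenating adjacent extents, the real count is typically lower.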
So, these are the good points. Again, why was I complaining? Simply because, although Informix has done a pretty good job of preventing the number of extents from growing too much, we had a very low limit on the number of extents. On platforms with a system page size of 2K this was around 220-240 extents (max), and for 4K platforms it was around double that (440-480). With version 10 we became able to have greater page sizes, and that increases the maximum number of extents per partition.
Why didn't we have a fixed limit, and why was it different across platforms? To explain that, we must dive a bit deeper into the structure of a dbspace and its tables. Each partition in Informix has a partition header. Each partition header is stored in exactly one Informix page. There is a special partition in every dbspace (the tablespace tablespace) that holds all the partition headers for that dbspace. This special partition starts at a specific page, but it can contain more than one extent.
Also important for understanding this is the notion of a slot. Most Informix pages contain several slots. In a data page, a slot contains a row (in the simplest cases). A partition header is a page that contains 5 slots:
  1. Slot 1
    This contains the partition structure: things like creation date, partition flags, maximum row size, number of special columns (VARCHAR and BLOB), number of keys (if it's an index or has index pages), number of extents, and a lot of other stuff. If you want to see all the details, check the sysptnhdr table in $INFORMIXDIR/etc/sysmaster.sql. It's basically an SQL interface for the partition headers in the instance.
    In version 11.50 this slot should occupy 100 bytes. Previous versions can use less (no serial8 and bigserial)
  2. Slot 2
    Contains the database name, the partition owner, the table name and the NLS collation sequence
  3. Slot 3
    Contains details about the special columns. If there are no special columns this slot will be empty
  4. Slot 4
    Contains the description of each key (if it's an index or a mix). Starting with version 9.40, indexes are by default stored in their own partitions. This was not the case in previous versions, where a single partition could contain index pages interleaved with data pages.
    Currently, by default, a partition used for data should not have any keys, so this slot will be empty
  5. Slot 5
    Finally, this is the slot that contains the list of extents.
Now we know the list of extents must be stored in the partition header. The partition header has 5 slots, and the size of the first four may vary. This means the free space for slot 5 (the extent list) is variable. These are the reasons why we had a limit and why that limit was not fixed: it would vary with the table structure, for example, and naturally it would vary with the page size.
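Since sysptnhdr exposes the partition headers through SQL, a DBA can watch extent counts without any special tools. A hedged sketch (the 100-extent threshold is just an example, and the column names assume a recent sysmaster schema):

```sql
-- Partitions with more than 100 extents (nextns = number of extents)
SELECT t.dbsname, t.tabname, p.nextns
  FROM sysmaster:sysptnhdr p, sysmaster:systabnames t
 WHERE p.partnum = t.partnum
   AND p.nextns > 100
 ORDER BY p.nextns DESC;
```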
A table that reached its maximum number of extents was a very real and serious problem in Informix. If you reached the table's limit on the number of extents and all its data pages were full, the engine would need to allocate one more extent in order to complete new INSERTs. But that would require some free space in the partition header. If there was no space left, any INSERT would fail with error -136:

-136 ISAM error: no more extents.

After hitting this nasty situation there were several ways to solve it, but all of them required temporary table unavailability, which these days is a rare luxury... We tend to run 24x7 systems. Even systems that have a maintenance window would suffer, because most of the time the problem was noticed during "regular" hours...

So, I've been talking in the past... This used to be a problem. Why isn't it a problem anymore? Because Panther (v11.7) introduced two great features:
  1. The first is that the engine is now able to automatically extend the partition header when slot 5 (the extent list) becomes full. When this happens, it allocates a new page for the partition header to be used for the extent list, so you should not see error -136 caused by reaching the extent limit. At first you may think like my customer's DBA: "Wow! Isn't that dangerous? Will I get tables/partitions with tens of thousands of extents?". The answer is simple: you won't, because all the nice features that were always there (automatic extent concatenation, extent doubling...) are still there. This just avoids the critical situation where use of the table became impossible (for new INSERTs). And naturally it doesn't mean you should not care about the number of extents; for performance reasons it's better to keep them low
  2. The second great feature is an online table defragmenter. Extents can now grow without limit, but that's not good for performance. Once you notice you have a table with too many extents, you can ask the engine to defragment it. I will not dig into this, simply because someone else already has: I recommend the recent developerWorks article entitled "Understand the Informix Server V11.7 defragmenter"
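For reference, the defragmenter is invoked through the sysadmin database's task() function; a sketch, where the database, owner, and table names are purely illustrative:

```sql
-- Ask the engine to defragment the extents of stores:informix.customer
EXECUTE FUNCTION sysadmin:task("defragment", "stores:informix.customer");
```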


Wednesday, November 24, 2010

UDRs: In transaction? / Em transacção?

This article is written in English and Portuguese
Este artigo está escrito em Inglês e Português

English version:

Introduction

I just checked... This will be post 100!!!... I've never been so active on the blog... We have Panther (full of features I still haven't covered), I did some work with OAT and tasks that I want to share, and besides that I've been trying some new things... Yes... Although I've been working with Informix for nearly 20 years (it's scary to say this, but it's true...) there are aspects I usually don't work with. I'd say the one I'm going to look into today is not used by the vast majority of users. And that's a shame, because:
  1. It can solve problems that we aren't able to solve in any other way
  2. If it was more used, it would be improved faster
Also, many people think that this is what marked the decline of the Informix company. You have probably already figured out that I'm talking about extensibility. To recall a little history: in 1995, Informix had the DSA architecture in version 7, and it acquired the Illustra company, founded by Michael Stonebraker and others. Mr. Stonebraker already had a long history of innovation (which he has kept up to this day), and he stayed with Informix for some years. All the technology around DataBlades and extensibility in Informix comes from there... Informix critics say that the company got so absorbed in the extensibility features (which it believed would be the "next wave") that it lost its market focus. The truth is that extensibility never became a mainstream feature, either in Informix or in other databases, and all of them followed Informix's launch of Universal Server (1996): Oracle, IBM DB2, etc.

But this article will not focus on the whole extensibility concept. It would be impossible and tedious to try to cover it in one blog article. Instead, I'll introduce one of its aspects: User Defined Routines (UDRs), and in particular routines written in the C language.

There is a manual about UDRs, and I truly recommend that you read it. But here I'll follow another approach: We'll start with a very simple problem that without C UDRs would be impossible to solve, define a solution for it, and go all the way to implement it and use it with an example.


The problem

Have you ever faced a situation where you're writing a stored procedure in SPL and want to put part of it inside a transaction, but you're afraid the calling code is already in a transaction?
You could workaround this by initiating the transaction and catching the error (already in transaction) with an ON EXCEPTION block.
But this may have other implications (ON EXCEPTION blocks are tricky when the procedure is called from a trigger). So it would be nice to check if the session is already in a transaction. A natural way to do that would be a call to DBINFO(...), but unfortunately current versions (up to 11.7.xC1) don't allow that. Meaning there is no DBINFO() parameter that makes it return that information.
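To make that workaround concrete, here is a rough SPL sketch (the procedure name is hypothetical, and this is a sketch only; -535 is Informix's "Already in transaction" error):

```sql
CREATE PROCEDURE do_unit_of_work()

  DEFINE started_tx INTEGER;

  -- If BEGIN WORK fails with -535 ("Already in transaction"),
  -- remember that the caller owns the transaction and carry on
  ON EXCEPTION IN (-535)
    LET started_tx = 0;
  END EXCEPTION WITH RESUME;

  LET started_tx = 1;
  BEGIN WORK;

  -- ... the part we wanted inside a transaction ...

  -- Only commit if we opened the transaction ourselves
  IF started_tx = 1 THEN
    COMMIT WORK;
  END IF;

END PROCEDURE;
```

The catch, as noted above, is that the ON EXCEPTION mechanics get tricky when the procedure is fired from a trigger, which is exactly what motivates the C UDR approach below.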


Solution search

One important part of Informix extensibility is the so-called DataBlade API. It's a set of programmable interfaces that we can use inside Datablade code and also inside C UDRs. The fine infocenter has a reference manual for the DataBlade API functions. A quick search there for "transaction" turns up a specific function: mi_transaction_state()
The documentation states that when we call it (no parameters needed) it returns an mi_integer (think of it as an integer for now) with one of these values:
  • MI_NO_XACT
    meaning we don't have a transaction open
  • MI_IMPLICIT_XACT
    meaning we have an implicit transaction open (for example if we're connected to an ANSI mode database)
  • MI_EXPLICIT_XACT
    meaning we have an explicit transaction open
This is all we need conceptually.... Now we need to transform ideas into runnable code!

Starting the code

In order to implement a C UDR function we should proceed through several steps:
  1. Create the C code skeleton
  2. Create the C code function using the Datablade API
  3. Create a makefile that has all the needed instructions to generate the executable code in a format the engine can use
  4. Compile the code
  5. Use SQL to define the new function, telling the engine where it can find the function and the interface to call it, as well as the language and other function attributes
  6. Test it!



Create the C code skeleton

Informix provides a tool called the DataBlade Developers Kit (DBDK), which includes several components: Blade Manager, Blade Pack and Bladesmith. Blade Manager lets us register datablades against databases; Blade Pack does the "packaging" of all the files (executable libraries, documentation files, SQL files etc.) that make up a datablade. Finally, Bladesmith helps us create the various components and source code files. It's a development tool that only runs on Windows but can also be used to prepare files for Unix/Linux. For complex projects it may be a good idea to use Bladesmith, but for this very simple example I'll do it by hand. Also note I'm just creating a C UDR; these tools are intended for much more complex projects. A datablade can include new datatypes, several functions etc.
So, for our example I took a peek at the recent Informix Developer's Handbook to copy the examples.

Having looked at the examples above, it was easy to create the C code:


/*
This simple function returns an integer to the calling SQL code with the following meaning:
 0 - We're not within a transaction
 1 - We're inside an implicit transaction
 2 - We're inside an explicit (BEGIN WORK...) transaction
-1 - Something unexpected happened!
*/

#include <milib.h>

mi_integer get_transaction_state_c( MI_FPARAM *fp)
{
    mi_integer i, ret;

    i = mi_transaction_state();
    switch (i)
    {
        case MI_NO_XACT:
            ret = 0;
            break;
        case MI_IMPLICIT_XACT:
            ret = 1;
            break;
        case MI_EXPLICIT_XACT:
            ret = 2;
            break;
        default:
            ret = -1;
    }
    return (ret);
}

I've put the above code in a C source file called get_transaction_state_c.c

Create the makefile

Again, for the makefile I copied some examples and came up with the following. Please consider this as an example only. I'm not an expert on makefile building and this is just a small project.


include $(INFORMIXDIR)/incl/dbdk/makeinc.linux

MI_INCL = $(INFORMIXDIR)/incl
CFLAGS = -DMI_SERVBUILD $(CC_PIC) -I$(MI_INCL)/public $(COPTS)
LINKFLAGS = $(SHLIBLFLAG) $(SYMFLAG)

all: get_transaction_state.so

clean:
	rm -f get_transaction_state.so get_transaction_state_c.o

get_transaction_state_c.o: get_transaction_state_c.c
	$(CC) $(CFLAGS) -o $@ -c $?

get_transaction_state.so: get_transaction_state_c.o
	$(SHLIBLOD) $(LINKFLAGS) -o $@ $?


Note that this is a GNU Make makefile. The first line includes a makefile that IBM supplies with Informix; it basically contains variable and macro definitions. You should adapt the include directive to your system (the name of the makefile can vary with the platform) and make sure that the variables I use are also defined in your system's base makefile.
After that I define a few more variables and create the makefile targets. I just want it to build the get_transaction_state.so dynamically loadable library, so I include the object file (get_transaction_state_c.o) generated from my source code (get_transaction_state_c.c). Pretty simple if you have basic knowledge of makefiles.

Compile the code

Once we have the makefile we just need to run a simple command to make it compile:


cheetah@pacman.onlinedomus.net:informix-> make
cc -DMI_SERVBUILD -fpic -I/usr/informix/srvr1150uc7/incl/public -g -o get_transaction_state_c.o -c get_transaction_state_c.c
gcc -shared -Bsymbolic -o get_transaction_state.so get_transaction_state_c.o
cheetah@pacman.onlinedomus.net:informix->
The two commands are the expansion of the macros/variables defined in the makefile(s): the first compiles the source code and the second generates the dynamically loadable library. If all goes well (as it did in the output above), we'll have a library at this location, ready for use by Informix:


cheetah@pacman.onlinedomus.net:informix-> ls -lia *.so
267913 -rwxrwxr-x 1 informix informix 5639 Nov 23 22:06 get_transaction_state.so
cheetah@pacman.onlinedomus.net:informix-> file *.so
get_transaction_state.so: ELF 32-bit LSB shared object, Intel 80386, version 1 (GNU/Linux), dynamically linked, not stripped
cheetah@pacman.onlinedomus.net:informix->

Use SQL to define the function

Now that we have executable code, in the form of a dynamically loadable library, we need to tell Informix to use it. We do that by creating a new function, telling the engine that it's implemented in the C language and where the library is stored. For that I created a simple SQL file:


cheetah@pacman.onlinedomus.net:informix-> ls *.sql
get_transaction_state_c.sql
cheetah@pacman.onlinedomus.net:informix-> cat get_transaction_state_c.sql
DROP FUNCTION get_transaction_state_c;

CREATE FUNCTION get_transaction_state_c () RETURNING INTEGER
EXTERNAL NAME '/home/informix/udr_tests/get_transaction_state/get_transaction_state.so'
LANGUAGE C;
cheetah@pacman.onlinedomus.net:informix->


So, let's run it...:


cheetah@pacman.onlinedomus.net:informix-> dbaccess stores get_transaction_state_c.sql

Database selected.


674: Routine (get_transaction_state_c) can not be resolved.

111: ISAM error: no record found.
Error in line 1
Near character position 36

Routine created.


Database closed.

cheetah@pacman.onlinedomus.net:informix->


Note that the -674 error is expected, since my SQL starts with a DROP FUNCTION and the function doesn't exist yet. If I were using 11.7 (due to several tests I don't have it ready at this moment) I could have used the new "DROP FUNCTION IF EXISTS..." syntax.
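On 11.7 the drop could then be written so it never raises -674 (I could not test this here, so take it as a sketch of the 11.7 syntax):

```sql
DROP FUNCTION IF EXISTS get_transaction_state_c;
```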

So, after this step I should have a function callable from the SQL interface with the name get_transaction_state_c(). It takes no arguments and returns an integer value.

Test it!

Now it's time to see it working. I opened a session in the stores database and did the following:
  1. Ran the function. It returned "0", meaning no transaction was open.
  2. Then I opened a transaction and ran it again. It returned "2", meaning an explicit transaction was open.
  3. I closed the transaction and ran the function a third time. As expected, it returned "0".
Here is the output:


cheetah@pacman.onlinedomus.net:informix-> dbaccess stores -

Database selected.

> EXECUTE FUNCTION get_transaction_state_c();


(expression)

0

1 row(s) retrieved.

> BEGIN WORK;

Started transaction.

> EXECUTE FUNCTION get_transaction_state_c();


(expression)

2

1 row(s) retrieved.

> ROLLBACK WORK;

Transaction rolled back.

> EXECUTE FUNCTION get_transaction_state_c();


(expression)

0

1 row(s) retrieved.

>

We haven't seen it return "1" yet. That happens when we're inside an implicit transaction, which we can observe by using the function in an ANSI-mode database. For that I'm going to use another database (stores_ansi), and naturally I need to create the function there (using the previous SQL statements). Then I repeat more or less the same steps, and the result is interesting:


cheetah@pacman.onlinedomus.net:informix-> dbaccess stores_ansi -

Database selected.

> EXECUTE FUNCTION get_transaction_state_c();


(expression)

0

1 row(s) retrieved.

> SELECT COUNT(*) FROM systables;


(count(*))

83

1 row(s) retrieved.

> EXECUTE FUNCTION get_transaction_state_c();


(expression)

1

1 row(s) retrieved.

>
Notice that the first execution returns "0". Since I hadn't done any operations, no transaction was open. But right after a simple SELECT the return is "1", meaning an implicit transaction is open. This has to do with the nature and behavior of ANSI-mode databases.
If you use them and intend to use this function, you must take that into consideration. Or you could simply map both the "1" and "2" outputs derived from mi_transaction_state() to a single "1", signaling only that a transaction is open (omitting the distinction between implicit and explicit transactions).
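If that simplification suits your applications, only the switch in the UDR changes. The sketch below isolates that mapping in a plain C helper so it compiles outside the server: the MI_* enum values here are stand-ins for illustration only (the real UDR would include milib.h, use its definitions, and feed the result of mi_transaction_state() into the mapping, as in the function above). The helper name is hypothetical.

```c
/* Stand-ins for the milib.h constants -- the numeric values here are
   illustrative only; a real UDR uses the definitions from <milib.h>. */
enum { MI_NO_XACT = 0, MI_IMPLICIT_XACT = 1, MI_EXPLICIT_XACT = 2 };

/* Collapse the three transaction states into "in a transaction or not":
   0 = no transaction, 1 = some transaction (implicit or explicit),
   -1 = unexpected value. */
int simplify_tx_state(int state)
{
    switch (state)
    {
        case MI_NO_XACT:
            return 0;
        case MI_IMPLICIT_XACT:
        case MI_EXPLICIT_XACT:
            return 1;   /* both kinds count as "open" */
        default:
            return -1;  /* defensive: unknown state */
    }
}
```

In the UDR you would simply `return simplify_tx_state(mi_transaction_state());` instead of the three-way switch.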

Final considerations

Please keep in mind that this article serves more as a light introduction to C language UDRs than as the solution to a real problem. If you need to know whether you're already in a transaction (inside a stored procedure, for example) you can use this solution, but you could just as well try to open a transaction and catch and handle the error in an ON EXCEPTION block.

Also note that if this is a real problem for your applications, you can make your procedure code work as a unit even inside an already open transaction, by using the SAVEPOINT functionality introduced in 11.50. In simple pseudo-code it would be done like this:

  1. Call get_transaction_state_c()
  2. If we're inside a transaction, set TX="SVPOINT", create a savepoint called "MYSVPOINT" and go to 4)
  3. If we're not inside a transaction, set TX="TX" and open one. Go to 4)
  4. Run our procedure code
  5. If any error happens, test the TX variable; otherwise go to 8)
  6. If TX=="TX" then ROLLBACK WORK. Return error
  7. Else, if TX=="SVPOINT" then ROLLBACK WORK TO SAVEPOINT 'MYSVPOINT'. Return error
  8. Return success
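The pseudo-code above could be sketched in SPL roughly like this (a sketch only: it assumes the function from this article is registered in the database, the procedure and savepoint names are illustrative, and real code would need more careful error handling; -746 is the Informix error for a user-specified message):

```sql
CREATE PROCEDURE atomic_work()

  DEFINE tx CHAR(8);

  -- On any error, undo only what this procedure did, then re-raise
  ON EXCEPTION
    IF tx = 'TX' THEN
      ROLLBACK WORK;
    ELSE
      ROLLBACK WORK TO SAVEPOINT mysvpoint;
    END IF;
    RAISE EXCEPTION -746, 0, 'atomic_work failed';
  END EXCEPTION;

  IF get_transaction_state_c() > 0 THEN
    -- Caller owns the transaction: mark a savepoint instead
    LET tx = 'SVPOINT';
    SAVEPOINT mysvpoint;
  ELSE
    LET tx = 'TX';
    BEGIN WORK;
  END IF;

  -- ... the procedure's unit of work goes here ...

  -- Only commit if we opened the transaction ourselves
  IF tx = 'TX' THEN
    COMMIT WORK;
  END IF;

END PROCEDURE;
```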
After this introduction, I hope to be able to write a few more articles related to this topic. The basic idea is that sometimes it's very easy to extend the functionality of Informix. And I feel that many customers don't take advantage of this.
Naturally, there are implications to writing C UDRs. The example above is terribly simple, and it will not harm the engine. But when you're writing code that will be run by the engine, a lot of questions pop up... memory usage, memory leaks, security, stability... But there are answers to these concerns. Hopefully some of them (problems and solutions) will be covered in future articles.

