Code Monkey Club #KidsCoding


Well, I am back for my second year of holding lunch sessions with kids at River West Park teaching coding. I learned quite a bit from my first year and I’ve changed the offering a little bit. In my first year, I created the exercises in MineCraft. This was both good and bad: good as it immediately got kids engaged with the exercises, but bad because the kids were more focused on doing things in the game instead of learning about coding. I got a lot of feedback that the kids loved the sessions, but the kids then wanted to learn how to build entire worlds in MineCraft and create videos on YouTube. Clearly, I needed to think about this a little differently.

Code.org

The white knight for me was the excellent set of resources that are available to everyone on code.org. The exercises illustrate programming concepts and fundamentals and are available for everyone to use. They leverage existing games/themes like Flappy Bird, Star Wars, and MineCraft.

The best part is the interface that code.org has where the kids can drag and drop logic blocks and then see the results in playing the game immediately. The interface is ideal for getting kids excited and receiving positive feedback on their work. Even better, my kid’s school had already done some of the basic exercises so they knew the interface.

If you are interested, you can find the exercises on code.org.

What I add

I’ve just had one session, but so far the content allows the kids to learn more and be more engaged! I get to spend less time managing/troubleshooting exercises and more time assisting the kids with them. Since the kids have already done some of the basic exercises, I am also able to focus on the exercises that reinforce or introduce more advanced programming fundamentals. This allows me to add insight and details on what is really happening behind the scenes with the program.

So I think this is a great set of exercises, but we will see as we work through them. So far, they are engaging the kids more, teaching them new concepts, and allowing me to add detail and insight where appropriate.

And I think we are still having fun!

 

5 Books that changed the way I think

Over the years I have been lucky enough to find books that have profoundly changed the way I perceive the world and the way I think. I usually know when I have found one of these books when I read the book over and over. This is a list of those top 5 books in chronological order of when I read them. I’ve also included how I became aware of the book.

  1. Lord of the Rings – John Ronald Reuel Tolkien

Like most young adults, I was a bit of a grazer when it came to reading. I was reading enough at school that I really wasn’t looking for books to lose myself in outside of school. And then I found Lord of the Rings. Funny enough, I actually stopped reading the first time in the Council of Elrond chapter. The book didn’t really grab me up to that point. The second time I read the book I made it past that chapter and I was hooked. I think I have read the book at least 6 times now. The book really taught me how important stories, and the stories behind the stories, are. In many ways Lord of the Rings also taught me how to be thoughtful. Great Book.

I became aware of Lord of the Rings just from other students in my classes. Funny enough, my brother was not a Lord of the Rings fan. I remember he was a fan of the Dune series. I think this was a generational thing.

  2. Gödel, Escher, Bach – Douglas Hofstadter

I still remember spending weekends cycling to the park and reading this 700-page tome on formal systems. This is perhaps where I really fell in love with Computer Science. This was probably the height of my being a geek as well. For the life of me, I can’t remember how I became aware of this book.

This book really did teach me how to analyze and create formal systems and in many ways how to code. It also caused me to think at a higher level about intelligence and consciousness. Great book to think about big thoughts.

  3. Zen and the Art of Motorcycle Maintenance – Robert Pirsig

One of the books I first spotted in my brother’s library. I picked it up and was hooked immediately. Zen taught me the beauty of introspection and thought. It also taught me to be respectful of people who may have psychological issues; the book conveyed the emotion and heartache of mental illness as I read. I found the entire book and story fascinating. To this day it is probably my favourite book and the first one I reach for when I have spare time. Robert Pirsig’s second book, Lila, is equally good.

Under the covers the book also spoke to me a lot on Quality and what quality is. This has been very helpful in my work life.

  4. Iron John – Robert Bly

The second book I spotted in my brother’s library. The perfect book for a man in his 30’s to read to find guidance on the changes a man will experience growing up. A great book that balanced embracing what it means to be a man in the face of much bashing of masculinity in the 90’s. This book gave me great confidence. Perhaps a little too much.

  5. Epistulae morales ad Lucilium – Seneca the Younger

I think I found this book after repeatedly seeing Seneca quoted in other books. This is a philosophy book composed of letters from Seneca. I loved how the letters were grounded in reality and in actual stories. This, combined with the Stoic philosophy, really spoke to me and resonated with my beliefs. I found more confidence in aligning myself with the Stoic principles as I went through some challenges at work and in my personal life.

Best Work Book

The best work book I have read comes down to a choice between two:

  1. Beautiful Teams

Beautiful Teams is a wonderful book with each chapter describing a situation and how a team was structured to help address the situation. Great book with a lot of real world examples of what great teams can do.

  2. Leading Geeks

Awesome book providing insight on how leading Software Development professionals is different from leading anyone else. A must read for any Software Development professional. Even if you are not leading other geeks, chances are your manager will have read this book!

Why I love coding

Recently my wife asked me to look into creating a little application to generate a schedule for a softball season. I had done a bunch of coding within SQL, but hadn’t used a general-purpose programming language for quite a while. Most of my experience was with procedural languages, not Object Oriented languages. To be honest, I’ve always felt that Object Oriented languages are a bit bloated and that their functionality might not be required for every solution. I feel this is one of those solutions.

So I tried to search out a new procedural language I could play with. When I went to University and was first gainfully employed, I was using procedural languages: Pascal, Fortran, Cobol, Natural, and C were familiar to me. After doing some searching, I eventually found Go. It seems to have the right mix of formality and informality that I was looking for in coding this type of solution. My requirements were also pretty basic: just do some calculations and generate pretty basic output. So I installed the tools and started to code.
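Since I haven’t shared the actual schedule generator, here is a minimal sketch of what such a program might look like in Go, using the classic round-robin “circle method” (the team names are made up for illustration):

```go
package main

import "fmt"

// roundRobin generates a single round-robin schedule: every team
// plays every other team exactly once. It uses the circle method:
// fix the first team in place and rotate the rest each round.
func roundRobin(teams []string) [][][2]string {
	n := len(teams)
	if n%2 != 0 {
		teams = append(teams, "BYE") // odd team count: pad with a bye week
		n++
	}
	var rounds [][][2]string
	for r := 0; r < n-1; r++ {
		var games [][2]string
		for i := 0; i < n/2; i++ {
			games = append(games, [2]string{teams[i], teams[n-1-i]})
		}
		rounds = append(rounds, games)
		// rotate every team except the first one
		teams = append(teams[:1], append([]string{teams[n-1]}, teams[1:n-1]...)...)
	}
	return rounds
}

func main() {
	// hypothetical team names, just for the example
	rounds := roundRobin([]string{"Cubs", "Jays", "Sox", "Mets"})
	for i, games := range rounds {
		fmt.Printf("Week %d: %v\n", i+1, games)
	}
}
```

For four teams this produces three weeks of two games each; a real season generator would layer on dates, diamonds, and repeat cycles, but the pairing logic is the core calculation.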

Enjoyment

I really enjoyed the process of coding, and the joy was not unlike what my son feels playing Minecraft. It was all about being able to create things.

When I code, I take great pleasure in being able to essentially create a mini-model of the world and to get that world to operate like I require. It is a bit of a kick to be able to control exactly how the computer program runs and what behavior gets executed, even if that something is rudimentary like a Fibonacci sequence or generating prime numbers. Being able to duplicate a structure from the real world is fun, challenging, and provides a real sense of accomplishment.
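Those rudimentary-but-satisfying programs look something like this in Go:

```go
package main

import "fmt"

// fib returns the first n numbers of the Fibonacci sequence.
func fib(n int) []int {
	seq := make([]int, 0, n)
	a, b := 0, 1
	for i := 0; i < n; i++ {
		seq = append(seq, a)
		a, b = b, a+b
	}
	return seq
}

// primes returns the first n primes by simple trial division
// against the primes already found.
func primes(n int) []int {
	var ps []int
	for candidate := 2; len(ps) < n; candidate++ {
		isPrime := true
		for _, p := range ps {
			if candidate%p == 0 {
				isPrime = false
				break
			}
			if p*p > candidate {
				break
			}
		}
		if isPrime {
			ps = append(ps, candidate)
		}
	}
	return ps
}

func main() {
	fmt.Println(fib(10))   // [0 1 1 2 3 5 8 13 21 34]
	fmt.Println(primes(8)) // [2 3 5 7 11 13 17 19]
}
```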

Of course I’m remembering only the good times of my relationship with coding. Like any old flame, I’m sure I am forgetting those long nights arguing for hours about an improperly typed variable that caused a strange bug. But I am going to try to see if some of that old spark remains.

On the plus side, the IDE interfaces make all the languages look young and robust. How could I resist?

#minecraft code club – day 4 #compile #FTW


I remember talking about it with friends. I didn’t want to teach kids to use Scratch. Scratch was ok, but I wanted to teach them how to code in Java. Even more, I wanted to teach them what it was to compile code and what a Compiler was. People thought that the content would not resonate with Grade 4’s. And that was the polite comment. 🙂

So here I was in the third Minecraft Code Club session, and I dearly wanted to cover compilation. How best to do it, though?

What I did was start a discussion with the kids about how we read books, but computers read hex codes. I actually showed them some hex dumps. I know, I know – it could be considered a punishment, but I kept it brief. A few of those wonderful hex dumps had the additional column to the right that translates the hex codes into readable text to help the humans. Most of the kids understood that the two types of text could hold the same information.

And then I drew the analogy that a Compiler is just a computer program that translates our code in words into the computer’s hex codes. I know it is a vast oversimplification, but it resonated with most of them and I saw nodding heads. Feeling the opportunity, I then referred back to our first class, where we talked about how working with Minecraft Plugins was like giving a needle to the Minecraft program! Compiling a program is the action of creating that needle to give to the program. That is how we do it. Again a vast oversimplification, but probably appropriate for this level of knowledge.

I waited and watched – there still seemed to be acceptance that this could be right. Not wanting to push it any further, we went on to the exercise of spawning an Enderdragon by modifying some code and compiling it on my machine while they watched. The spawning of an Enderdragon is the ultimate way to drive a point home. After that was done, I was able to sit back and let them have some time to play with the plugins and go on a couple of quests.

The moment

Then the moment came that was so rewarding. I got called over to the table as the kids were trying to spawn an Enderdragon.

“Mr Bunio, can you show us again how to give our Minecraft a needle so we can spawn an Enderdragon???”

“Yes, I most certainly can…”

#Minecraft Code Camp – Day 1

Well, today was day 1 of Minecraft Code Club at my kid’s school. I was lucky enough to have a parent council that was crazy enough to listen to my grand design for how I would teach kids coding by looking at Minecraft Plugins. Even crazier, I was going to do this over lunch in three 35-minute sessions.

And I wasn’t going to teach coding by using Scratch or any other graphical coding language. Nope. Straight into Java. And DOS commands – did I mention the DOS commands?

And then, just for shits and giggles, I was going to do it on Windows 8 laptops.

Yep, I’m a glutton for punishment.

Day One

Today was day one. I was going to introduce the concepts of a server and a computer program, and look at a plugin to build a house. I had 21 Grade 4 kids in a library with 7 laptops.

It was fabulous.

The kids had all the energy and passion I would have expected. But I didn’t expect the questions on:

  • Can I show them how to do this at home?
  • Can I help them build their own server?
  • Can I help them build the Taj Mahal? (From a Minecraft book) Yes some of them even came prepared with reference material!

We even encountered a bug. Or a glitch as they called it. And they loved every second of it.

I must admit, so did I. Through them I relived the first time in my basement when I typed my first Basic program into my Vic 20. I fell in love. For a kid who couldn’t control much, I could create worlds. Everything was at my fingertips. Suddenly I didn’t feel so small and helpless. I could accomplish anything. Coding and Goaltending gave me the confidence to do anything.

It’s that feeling I want to instill in a few of the boys and girls. Through these lunch time programs we can show what is possible and how to create worlds. Maybe, just maybe, we can create some sparks in a few that will generate a love for coding. Now that would be cool.

Now where did I put that torch?

Why do we #DataModel at all?

People in the Database world take Normalization and Data Modeling as something that should be done without question. I compare it to best practices like versioning software: no one expects that anyone would create software without version control anymore. But more often recently I do get questioned and challenged on why we need to normalize and model data. Is it even required with the cheap disk space, memory, and server capacity available?

According to Wikipedia and others, the objective of normalization is:

“Database normalization is the process of organizing the fields and tables of a relational database to minimize redundancy and dependency. Normalization usually involves dividing large tables into smaller (and less redundant) tables and defining relationships between them. The objective is to isolate data so that additions, deletions, and modifications of a field can be made in just one table and then propagated through the rest of the database via the defined relationships.”

The rules of Normalization

1NF – A relation is in 1NF if and only if all underlying domains contain scalar values only. Violation: the table contains repeating groups.

2NF – A relation is in 2NF if and only if it is in 1NF and every non-key attribute is irreducibly dependent on the primary key (every column must depend solely on the primary key). Violation: a non-key field is a fact about a subset of a key.

3NF – A relation is in 3NF if and only if it is in 2NF and every non-key attribute is non-transitively dependent on the primary key. Violation: a non-key field is a fact about another non-key field.

4NF – A relation R is in 4NF if and only if, whenever there exists a (nontrivial) multi-valued dependency A ->> B between subsets A and B of the attributes of R, all attributes of R are also functionally dependent on A. Violation: a table contains two or more independent multi-valued facts about an entity (and the record must also satisfy third normal form).

In relational database theory, second and third normal forms are defined in terms of functional dependencies, which correspond approximately to our single-valued facts. A field Y is “functionally dependent” on a field (or fields) X if it is invalid to have two records with the same X-value but different Y-values. That is, a given X-value must always occur with the same Y-value. When X is a key, then all fields are by definition functionally dependent on X in a trivial way, since there can’t be two records having the same X value.
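To make these dependency violations concrete, here is a small hypothetical example sketched in Go: the customer’s city is a fact about the customer (a non-key field), not about the order key, so storing it on every order row violates Third Normal Form and invites the classic update anomaly (the customers and cities are invented for illustration):

```go
package main

import "fmt"

// A denormalized order row: CustomerCity depends on Customer,
// not on the OrderID key – a Third Normal Form violation.
// The city is stored redundantly on every order the customer has.
type OrderRow struct {
	OrderID      int
	Customer     string
	CustomerCity string
}

// moveCustomer demonstrates the update anomaly: changing one fact
// about one customer means rewriting every one of their order rows.
// Miss one row and the data is silently inconsistent.
// It returns how many rows had to be rewritten.
func moveCustomer(orders []OrderRow, customer, newCity string) int {
	touched := 0
	for i := range orders {
		if orders[i].Customer == customer {
			orders[i].CustomerCity = newCity
			touched++
		}
	}
	return touched
}

func main() {
	orders := []OrderRow{
		{1, "Acme", "Winnipeg"},
		{2, "Acme", "Winnipeg"},
		{3, "Bell", "Toronto"},
	}
	fmt.Println("rows rewritten:", moveCustomer(orders, "Acme", "Calgary"))

	// Normalized alternative: the customer's city lives in exactly
	// one place, so the same change is a single update.
	cityByCustomer := map[string]string{"Acme": "Winnipeg", "Bell": "Toronto"}
	cityByCustomer["Acme"] = "Calgary"
	fmt.Println("Acme is now in:", cityByCustomer["Acme"])
}
```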

The Questions

Now that we have reviewed the objectives and rules of normalization, let us summarize. The objective of Normalization is to:

  1. Minimize Redundancy
  2. Minimize Dependency

But what if we have extra storage available and storing redundant copies of data is not a problem? In fact, it probably will speed up our query response time. What if we also don’t require frequent modification of data so that having de-normalized data won’t result in update or deletion anomalies caused by excessive data dependency? Why should we still model our data and normalize?

The three reasons to Data Model

Simplicity, Consistency and Integrity, and Future Flexibility.

Simplicity

Every one of the violations mentioned above would require extra code and extra unit tests to validate proper functioning. Depending on the number of violations, this can become a severe amount of technical debt needlessly built into the software. There is an entire movement dedicated to the elimination of If statements (www.antiifcampaign.com). Software that is Data Driven rather than Condition Driven is simpler and easier to maintain over the life of the application.
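A tiny, hypothetical illustration of the difference in Go – the account types and fee amounts are made up, but the shape of the two approaches is the point:

```go
package main

import "fmt"

// Condition-driven: every new account type means another branch,
// another unit test, and another chance for the logic to drift.
func feeIf(accountType string) float64 {
	if accountType == "basic" {
		return 5.0
	} else if accountType == "premium" {
		return 0.0
	} else if accountType == "student" {
		return 2.5
	}
	return 10.0
}

// Data-driven: the same rule lives in data. Adding an account
// type is a data change, not a code change.
var feeByType = map[string]float64{
	"basic":   5.0,
	"premium": 0.0,
	"student": 2.5,
}

func fee(accountType string) float64 {
	if f, ok := feeByType[accountType]; ok {
		return f
	}
	return 10.0 // default for unknown types
}

func main() {
	fmt.Println(feeIf("student"), fee("student")) // both 2.5
}
```

In a real system the `feeByType` table would live in the database rather than in code, which is exactly where a well-normalized model earns its keep.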

An application that is Data Driven can also automate the creation of its test cases to validate proper functioning. This, combined with the enhanced simplicity, greatly adds to the quality of the application.

Consistency and Integrity

Even if the solution being modeled can accommodate redundant data and currently has only minimal potential for update and deletion anomalies, significant risk is being assumed by leaving these potential situations in your data model. How can you ensure that redundant data will be kept in sync and that update and deletion anomalies do not get introduced in the future as people and requirements change? Either through additional software development code, or through additional processes and latent knowledge held by resident experts. Neither of these situations is a good use of time and energy.

Relying on application code to enforce these rules is an example of an application-centric view of the data model. Unfortunately, not all activity on the data can be guaranteed to always go through the application. Data Fixes, Conversions, and enhancements all have the ability to bypass the application’s business logic and compromise the integrity of the data. All it takes is one high-value client with inaccurate or inconsistent data to irreparably harm a company’s reputation and revenue stream.

Future Flexibility

Solutions that are data driven and do not have excessive functional dependencies are much easier to evolve in the future. For example, I may have a business requirement to split one account type or combine account types. This type of conversion will be quite routine if I have modeled my data properly and minimized dependencies. If not, the conversion can be quite convoluted, and I will probably need to evaluate code before I can determine the implications of making such a change. Then I have to be sure I address and update all the redundant code throughout the application. Just because update and deletion anomalies don’t exist currently doesn’t mean they won’t happen in the future.

In addition, these changes to split or combine account types would probably also require code changes. If the solution were Data Driven, the need for such code changes would be minimized (not eliminated, but the probability would be much lower).

Summary

A well designed application and user interface will be able to be used with minimal training. It just makes sense and models the client’s current processes and tasks.

A well designed data model should have the same intuitive qualities. It also makes sense and models the business’s data: not how the application functions, but how the business exists. Modeling the data in this manner minimizes the work needed to work with the data now and to accommodate change in the future.

In Object Oriented parlance, the Data Model itself should be loosely coupled with high cohesion. Both the Object Model and Data Model should share this characteristic. (Although they will implement it in quite distinct ways)

#Agile Data Modeling – still a ways to go

I have wanted to write a Blog entry on Agile Data Modelling for a while now. It combines my two prime areas of interest, as I really started as a DBA/Data Architect and then moved on towards Project Management and Agile Project Management. But truly, Data Modelling has been and will always be my first true love. (Very appropriate that I am writing this article the day before Valentine’s Day)

I am on a project currently where I am struggling not to fall back into the traditional ways I have done Data Modelling in the past. Since almost all of my Data Modelling experience has been on more traditional projects, it is easy to fall back into that pattern. Thanks to Scott Ambler and Steve Rogalsky for reminding me of how we can continue to make Data Modelling more Agile.

More often than not, the areas of Database Design and Data Modelling have been among the most resistant to Agile Methods. Recently I came across this Blog post by Tom Haughey on the Erwin site:

Agile Development and Data Modeling

In some ways, I thought Tom was quite Agile in his preference to segment Data Modelling projects into 3-6 month phases or increments to help increase the chances for success. But other statements reminded me that we as Data Modellers still have a ways to go before we have joined the rest of the Agile team.

Some of the concerning statements were:

“Data modeling has always been performed in an iterative and incremental manner. The data model has always been expanded and enriched in a collaborative manner. In my 28 years of involvement in data management, no qualities of data modeling have been more consistently reiterated, not even non-redundancy. It is absurd to imply that traditional data modeling is done in one continuous act or that it is done all upfront by an isolated team without involving Subject Matter Experts and without sensible examination of requirements.”

By this same definition one could also say all analysis has been iterative and incremental, which we know is incorrect. I believe the misunderstanding may lie in what people define as an iteration. Of course Data Models are iterated on as analysis and data design proceed and more requirements are gathered. But is the data design part of an end-to-end iteration where a segment of the data model is promoted to production and used? Or is there a horizontal iteration of creating a high-level Enterprise Data Model before detailed data modeling is done? On almost all the projects I have been on, the answer is a resounding no. There usually is a big-bang implementation of the data model to the developers after months of analysis. If anything, Data Modelling tends to be more incremental than iterative.

“In summary, traditional data modeling is incremental, evolutionary and collaborative (and thereby agile) in its own right.”

Being incremental, evolutionary, and collaborative doesn’t necessarily make you Agile. I also don’t know if you can ever achieve Agile as an end state. We are striving to be more Agile, and I don’t believe that striving should ever end with us resting because we have become Agile.

“The implications of Agile proponents like Scott Ambler is that “the traditional approach of creating a (nearly) complete set of logical and physical data models up front or ‘early’ isn’t going to work.” One issue with a statement like this is what does “up front” or “early” mean. He says that the main advantage of the traditional approach is “that it makes the job of the database administrator (DBA) much easier – the data schema is put into place early and that’s what people use.” Actually, the main advantages are that it is a clear expression of business information requirements plus developers have a stable base from which to work.”

This is the one statement that is perhaps most troubling. The desire to get a stable base for developers is very similar to trying to get a stable analysis base for developers. In my experience, Data Modelers can be perfectionists (like all great analysts) and they struggle with releasing something that is not fully done. But this goes against Agile. Just like other functionality, we should try data models early and often and get feedback on how they are used and how they perform in production. We can then use that feedback to make the Data Models better as the project progresses. This example best highlights the difference between incremental and iterative Data Modelling:

Incremental – releasing stable sections of the Data Model for development and use. Limited changes to the Data Model are expected.

Iterative – releasing initial version of the Data Model for development and feedback to make the Data Model and future Data Models better. Moderate to significant changes to the Data Model are expected and embraced.

“They say that it requires the designers “to get it right early, forcing you to identify most requirements even earlier in the project, and therefore forcing your project team into taking a serial approach to development.” On the contrary, data and process modeling, and thereby data design and program design, should be done in a flip-flop manner. You collaborate on the requirements, model some data, model some processes, and iterate this process till the modeling is done – using a white-board and Post”

Hopefully Tom Haughey will read this Blog post and clarify this statement for me. It does sound like there may be aspects of Iterative Data Modeling being proposed, but this conflicts with the earlier statement, so I am unsure. The iterations still seem to be focused on the modeling, not on explicitly incorporating development and having the functionality promoted and used by the clients in production. (The only true measure of value)

“But remember this. The traditional SDLC (System Development Life Cycle), whatever its faults, has successfully delivered the core systems that run business across the world. Imagine delivering a new large brokerage trading system in 2-week intervals, or going live with a space shuttle project 2-weeks at a time, or delivering a robotic systems for heart surgery in 2-week intervals. Much, but not all, of Agile development has focused on apps like web-based systems and smaller, non-strategic systems.”

I’m not sure I agree with these comments. I guess it depends on how you define success. Given the statistics of the Standish Chaos reports, I’m not sure how anyone can say the Traditional SDLC has successfully delivered core systems. It is true that it has delivered core systems, but many of those projects may not be defined as a success by the clients. The statement that Agile development has been focused on smaller, non-strategic systems is also concerning. I’ve personally used Agile on large, strategic systems, and I’m sure many other people would agree.

“Database refactoring represents a simple change to a database schema that improves its design while retaining both its behavioral and informational semantics. Database refactoring is more difficult than code refactoring. Code refactorings only need to maintain behavior. Database refactorings also must maintain existing integrity and other business rules. The term “database” includes structural objects, such as tables and columns, and logic objects such as stored procedures and triggers.”

While I agree that refactoring a database can be complicated, the risk of extreme changes to the Data Model can be mitigated by creating a High Level Enterprise Data Model in Iteration 0 (and potentially by other methods). Frequently people against Agile state that iterations start without any initial work and, as such, changes can be drastic and complex. This is incorrect. Agile is a continuum, and if small phases of foundation work have value, Agile encourages their use. I have found this method very valuable.

Having experience both in creating Enterprise Data Models and in writing software, I would say refactoring significant portions of either hurts. So I would recommend trying to minimize drastic changes by doing some upfront high-level modeling. I would not say refactoring Data Designs is easier, though. A major framework change would be much more intensive and invasive.

Summary

So where does Agile Data Modeling go from here? Given that this was a pretty recent article, I’d say that there still is quite a way to go to incorporate Agile Methods in Data Modeling Methods. The good news is that Agile Data Modeling has much to offer Agile Projects. We just need to help to promote the use of Iterative Data Modeling in addition to Incremental Data Modeling. (Incremental Data Modeling is still better than the alternative)