CakePHP 3 – DebugKit, where are you?

Debug Kit, one of the most awesome plugins for CakePHP, is now installed by default with the cake core.

However, I recently set up a brand new Cake 3 app, and DebugKit was nowhere to be seen.

What gives?

After checking all the usual suspects, like making sure debug is set to true and that the plugin is loaded in app/config/bootstrap.php:

Plugin::load('DebugKit', ['bootstrap' => true]);

I resorted to checking out the CakePHP error log file (app/tmp/logs/error.log)… And lookly-look what we have there:

Warning: DebugKit not enabled. You need to either install pdo_sqlite, or define the "debug_kit" connection name.

Well, it just so happens that not only was my Cake install fresh, but so was my Ubuntu.

For me running:

sudo apt-get install php5-sqlite

.. fixed the problem.

But there are more stubborn cases, and the solution for those is described here.
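If installing the extension isn’t an option, the other remedy the warning mentions is to define a “debug_kit” connection yourself in config/app.php, and DebugKit will use that instead of SQLite. A minimal sketch, assuming a MySQL database created just for the toolbar (the credentials and the “debug_kit” database name are placeholders):

'Datasources' => [
    // ... your other connections ...
    'debug_kit' => [
        'className' => 'Cake\Database\Connection',
        'driver' => 'Cake\Database\Driver\Mysql',
        'host' => 'localhost',
        'username' => 'root',
        'password' => 'password',
        'database' => 'debug_kit', // create this database yourself
        'encoding' => 'utf8',
    ],
],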

Cake3 … baby steps (step 2 — setting up)


Now that we have cake installed, and our fresh application is up and running, it’s time to set up a few things and prepare for development.

I am going to use good ol’ MySQL for my DB, so first of all I need to create a database called “todo” and provide the right login credentials in
todo/config/app.php
I am going to set up both the “default” and “test” connections, kind of like so:

'Datasources' => [
    'default' => [
      'className' => 'Cake\Database\Connection',
      'driver' => 'Cake\Database\Driver\Mysql',
      'persistent' => false,
      'host' => 'localhost',
      'username' => 'root',
      'password' => 'password',
      'database' => 'todo',
      'encoding' => 'utf8',
      'timezone' => 'UTC',
      'cacheMetadata' => true,
    ],
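The “test” connection follows the same shape and just points at a separate database that the test suite can safely truncate. A quick sketch (the “test_todo” database name is simply my own choice):

    'test' => [
      'className' => 'Cake\Database\Connection',
      'driver' => 'Cake\Database\Driver\Mysql',
      'persistent' => false,
      'host' => 'localhost',
      'username' => 'root',
      'password' => 'password',
      'database' => 'test_todo',
      'encoding' => 'utf8',
      'timezone' => 'UTC',
      'cacheMetadata' => true,
    ],
],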

Next, I hear that cake migrations got even more awesome in the new version, since the plugin now leverages Phinx, a very solid migrations tool. So I am definitely going to use cakephp/migrations to keep my database changes organized.

The installation is now done via composer.
Just add the following line to your todo/composer.json

"cakephp/migrations": "dev-master"

You can place this in the “require” section.

Now, back to the terminal and your main app directory and simply run:

composer update

This should update any dependencies and install cakephp/migrations plugin.
Don’t forget to add Plugin::load('Migrations'); to your config/bootstrap.php

You can verify that the plugin is available by running bin/cake from your main application directory.

Let’s get our first migration setup, by running:

bin/cake migrations create Initial

Basically this will create a file for our first migration. My plan is to have a grand total of one table for the application.

We can easily define the table and add it to the database using the migrations plugin.
Open up the migrations file in config/Migrations/[something]_initial.php and let’s take a look at the change() method.

This is where the definition of the table should go. Generally speaking, the change() method has replaced up() and down() for the most part. You can still use them if you need to get very granular, but when it comes to creating and dropping (when reverting) a table, change() works perfectly well.

Let’s see how to define a table inside the change() method:

public function change()
    {
        // create the table
        $table = $this->table('todos');
        $table->addColumn('todo', 'string', ['limit' => 200])
              ->addColumn('created', 'datetime')
              ->addColumn('updated', 'datetime')
              ->addColumn('is_done', 'boolean')
              ->create();
    }

Notice that I don’t even have to supply any information about a primary key column.
By default the migration will create an “id” column, which is a PK and AUTO_INCREMENT’ed.
Sweet.
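If the default ever isn’t what you want, Phinx lets you take control of the primary key through the table() options. A hedged sketch (the “todo_id” column name is just an example of mine):

// only needed when the auto-generated "id" column isn't what you want
$table = $this->table('todos', ['id' => false, 'primary_key' => ['todo_id']]);
$table->addColumn('todo_id', 'integer')
      ->create();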

Give it a try:

bin/cake migrations migrate

If all goes well, you should now have a new table ready in your database.

Cake3 … baby steps (step 1 — getting started)

With the recent announcement of CakePHP 3 (beta 2) being released… it is really tempting to see where we are today.
And oh boy… how things have changed.

First things first, let’s get a hold of the latest cake core and set up our local environment to start baking.

One thing that jumps out right away is the use of composer to install cake. This is not a requirement, but is certainly a welcome addition.

I just happen to have composer installed already, so once I “cd” into the directory of my choice, I am ready for CakePHP installation.

sudo composer self-update
composer create-project --prefer-dist -s dev cakephp/app todo

So what now?
We should have a new directory called “todo” (that’s just the name of my app). Inside you will notice a new directory called “vendor”; this is where all composer-compliant modules live by default.
Cake core is now located there and so are any other 3rd party libraries that you might use.

All of this goes to say that CakePHP now follows the same standard of autoloading (PSR-4) as many other popular PHP frameworks and libraries.
To some degree, standardization, especially in the loosely defined PHP world, is a good thing.

Cake3 comes with its own development web server (bin/cake server), and you are welcome to test it at your leisure.

I’m going to go ahead and try to setup an apache virtual host and see if that works.

I need an entry in my /etc/hosts for my new “todo” app.

127.0.0.1     todo.local

Should do it…

To setup a virtual host (apache2), let’s cd to /etc/apache2/sites-available

We’ll create a new file called todo.local.conf, with the following contents:

<VirtualHost *:80>
    ServerName todo.local
    ServerAlias todo.local

    DocumentRoot /home/teknoid/work/todo/webroot
    <Directory /home/teknoid/work/todo/webroot>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride All
        Order deny,allow
        allow from all
    </Directory>
</VirtualHost>

Now let’s enable the site and reload the config. (Note: the Order/allow directives above are Apache 2.2 syntax; on Apache 2.4 you would use “Require all granted” instead.)

sudo a2ensite todo.local
sudo service apache2 reload

If all goes well, you should have a fresh CakePHP 3 app when you visit http://todo.local/

Now that we are ready to start baking… we’ll take a look at some additional setup and development tools to get us going in the next post.

I can’t get my estimations right… Top issues and remedies

Oh man, the topic of accurate estimations in SCRUM.

This is probably one of the most common battles between business owners and technology.

I’d like to point out one thing: the estimates are never accurate.
But how do you estimate? Story points? Hours? Both?

It always seems like whenever you work on a new project, the SCRUM approach (especially when it comes to estimations) is handled slightly differently. Some teams value estimations and watch the burn-down chart daily. Some just slap on a number and hope for the best.

I’m going to say that, based on my personal experience, the number one cause of poor estimations is extremely poorly written user stories.

Without much delay, I’d like to talk about issues whose remedies can help improve many SCRUM implementations; and even if they don’t work for your team… maybe an idea will be born from something listed below.

Let’s take a look…

1. Lack of well defined acceptance criteria

Besides getting something done, it is also important to make sure that what was done (programmed) is what was intended to be delivered and consequently accepted.

Here’s a typical example of a user story:

Summary:
As a user, I can view the report of all orders and filter them by date, name, and other criteria

Description:
User should have the ability to access the report at:
host/admin/orders/
– filter by date (from – to)
– filter by name
– filter by amount (from – to)
– columns should be sortable

Awesome, right?
Wrong.

Even if I am a developer intimately familiar with the system, I would not be able to estimate very accurately.
As a developer I would have two options:
1. Go on a hunt for missing information (let’s see below)
2. Make a bunch of assumptions (can’t spell assumption without ass)..

Let’s make a bunch of assumptions.. shall we?

The report is in the “admin” section, the user should be logged in as admin (let’s hope there’s only one tier of admin users and only that one tier should be logged in.. we don’t know)
I presume that the report should look like the other ones in the system so far, and make my best attempt at the design.

Four days later the report is loading, it looks good, the filters are working, and the user has to be logged in as admin. Columns are sortable. Not much to add.
Time to demo the story to the stakeholders… after some interesting conversation the story is rejected.

2. A story that requires research

Why did the above story get rejected?
Well it simply didn’t pass the acceptance criteria set by the business owners. Too bad we found out about this later.
The pagination was off… and apparently only shipping managers and super-admins should have had access…
There were a couple of other minor issues… but one of the required columns was missing (and apparently the data we need is stored in a completely arbitrary table elsewhere).

The number of reasons the story was rejected could go on forever.

The real question is: how can we anticipate all these changes/additions ahead of time?
Simple answer: involve business early.

A more complicated answer would be to introduce “spikes” into your sprints. This becomes a business practice, not a “let’s try to have a meeting about this” type of deal.
Spikes are usually viewed as oddities and negative things. Perhaps so.
But introduction of spikes is often necessary to weed out the problems.

The idea of course is that over time you need to rely less on spikes and the user stories become better defined.

Then comes the question of accountability. If during research and analysis you don’t have an agreed-upon design, a clear definition of what should be on the report and who should access it… then you’ve done your analysis poorly. And hopefully you would know so ahead of time, because a spike, just like a regular user story, should be estimated, demonstrated and accepted. Whether it’s a Photoshop mock-up or a working prototype, it’s best to get feedback early and often. Spikes help to accomplish exactly that.

As a developer, it is simply best to find out where your user story is headed before you get to coding. If the business is happy with the proposed direction, then chances are the outcome will be as desired.

Getting “on the same page” early is good and necessary.

3. We forgot about QA

Yes we did. QA is not there to break things… if developers aren’t working with QA early and often, then something is going to naturally break eventually.
I know how to develop and how to make something work. But I have neither the desire nor the time to figure out all the scenarios and permutations required to potentially break something or otherwise invalidate the user story.

Going back to the point of acceptance criteria… As a developer, you should treat QA as your first line of defense against poorly written acceptance criteria. During your pre-planning meeting, backlog review, or, in the worst case, at the beginning of the sprint, it should be a necessity to meet with QA and discuss the story at hand.
I need to know what the QA will be looking for so I can write the code in such a way that it would pass QA acceptance.

With that said, I hope that my research for the user story has been thorough enough and I can actually estimate the task much more accurately.
If I still feel that we have unanswered questions it is best to introduce a new spike or escalate the story ASAP.

Communicating problems in SCRUM is better before the beginning of the work than after.

And remember to include QA time in your estimations. If you have 4 days (approximately) to work on a task, you better not start 4 days before the end of the sprint, or the story simply won’t make it. Consider that QA needs 2 days to test and there’s a day of defect fixing. So the total estimation cannot be done just from the developer’s point of view.
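Put differently: 4 days of development + 2 days of QA + 1 day of defect fixing = 7 days, so that “4 day” story really needs to start at least 7 working days before the end of the sprint.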

Not only that, a story that might be simple to code could be much more complicated to test and that should be accounted for whether in time, resource or complexity estimations.

4. Technology needs a strong voice

But what happens if all of the above was done right, yet business owners still reject the user story?

Well then, simply put, it’s their own fault. Yet often there isn’t anybody to defend the developer.
If business owners were involved in the process from the start, they simply have no room to say that this is “not what we wanted”.
Granted, anyone could say whatever they want, and claim that the previous designs and discussions were just a fabrication and food for thought… at which point I would seriously consider employment elsewhere.

However, that’s not really the solution.

Either during planning or acceptance someone who represents technology (without the fear of being fired) needs to point out where things went off course and how to mitigate the issue going forward. Hopefully those responsible will behave better next time and crisis will be avoided.

But in case of emergency break glass, any glass… and look for a new job.

5. Business owners are a part of the team

There should not be such a fight between business and technology departments. But there almost always is.
Somebody has to be at fault and someone has to be right.

Fair enough.

But if there is one thing you can learn from negotiations, it is that the sooner you get the other side to “work on your team”, the better, and the quicker any negotiation will come to a reasonable resolution.

This should be much easier inside a company. After all you are all working on the same team, presuming that you all agree with the company’s vision and believe in the product. If not, that’s a different story.

To reiterate my earlier point, the sooner technology gets business owners into the process of development, the better. If business owners do not have time, they need to designate a resource who would be responsible for being there all the way through, from idea to implementation.

Without such intertwining of business and technology it would be nearly impossible to establish clear communication and well delivered products.

6. If it feels too complicated, it probably is

If it can be tested and delivered it should be a user story.

Yes, a single function can and should be its own user story when appropriate.

Sometimes it is tempting to say that a chunk of code by itself is useless because it doesn’t do anything. Well, let me ask you again… can it be tested? Does it perform any sort of logic? Did you actually attempt to write any unit tests for it? Chances are it is not as useless as you think.

Let’s take a look at an example…

A user story might require us to create a table, add a couple of methods for data processing to our model or DAO, and display this data in the UI somewhere.

While this sounds like one cohesive unit it really is not.
If you take a TDD approach (as you should), you can break this up into smaller testable units.

– release script/migration to create the table
– unit test and fixture data for method one
– production code for method one
– unit test and fixture data for method two
– production code for method two
– ui design
– front-end test

That’s a lot of “testable” stuff that can come out of a single user story. And even if each one of those pieces cannot become its own user story, the original story can certainly be split up into at least two or three.

Even with everything above working systematically and perfectly we’ll never achieve a consistent perfect-world scenario, where there are no emergencies or otherwise re-scoped sprints. Things change and blow up and priorities shift.

As a developer, I need to make sure I am always aware of what I am supposed to deliver, and that I am relatively protected: not working on my own in a dark corner, but making my work transparent and demo-able, and keeping others aware of what is being done from the start of the process to the end.

To conclude, I’d like to address all the managers out there… If you have to ask: “What is going on with this story?”, or “Where is that story?” (other than in a daily stand-up) then you are doing a poor job “managing”. A manager (scrum master, project manager) should be equipped with the necessary tools to answer that question. It is not the developer’s job to report on the progress. Simply put, if you are not aware of what’s going on with your team, then you need to seriously re-evaluate your approach to communication and your current business processes.

Work with multiple remotes and share your code with Git

Pretty often we need to work with a team of developers.
Sometimes we need to share our code, and with Git, having the ability to work with multiple “remotes” is really convenient.

Think of the “remote” as simply the URL of the repo where you push your updates.

Let’s dive into an example…

Mike has been doing some development in his feature/adding-new-report branch.
It would be awesome for me to work in my own branch (as any good git user will do), yet I would also love to be able to easily sync our work.

Thankfully with some modern tools and “Pull requests” available on most git hosting sites (github, stash, bitbucket, etc.), we can do it very easily.
The method below would also work just fine without “pull requests”, but for simplicity and an extra layer of sanity checks, I recommend using PRs when possible.

So now we can consider a typical setup.

You have a central repo at the remote (destination) called “origin”, where the code of your application lives. When we refer to “origin” in git, we are really talking about some destination (or URL) pointing to the git repo somewhere out there.
With that said, it is also not uncommon for each developer to clone the main repo, work on their own copy and then submit their changes by merging the changes in, or creating a “pull request”.

We can easily add other remotes besides “origin” (which is, by the way, just a name given to the initial repo destination. It can be changed, moved and you can add more destinations as we’ll see below).

Let me add Mike’s remote so I can checkout code from his repo (not just “origin”).

git remote add mike git@github.com:mike/my-first-repo.git

Now we have a new remote called “mike”

I can create a local branch to sync up with Mike’s changes (you may need to run git fetch mike first, so that his branches are known locally):

git branch --track feature/adding-new-report mike/feature/adding-new-report

Now we have a local branch called feature/adding-new-report. It is “connected” (tracking) with Mike’s branch in his repo.
Pretty cool.

Now we should get the latest changes into our local branch. (Presuming we are now on our local feature/adding-new-report branch).

git pull mike feature/adding-new-report

(Got the latest code from Mike).

Now we can make any changes and commit them as usual.

At this point we have a couple of options. We could just push the changes up to Mike’s branch/repo.
Additionally (and preferably, in my opinion) we could do the following:

git push origin feature/adding-new-report

This will create the same branch in our own remote repo (presuming “origin” points to the URL of my development repo). This enables us to create a pull request, which will clearly show the differences between my changes and the current state of Mike’s branch. (If we were diligent about keeping things up to date, only the necessary changes, without conflicts, should show up in the diff.)

Overall there are a few benefits to using pull requests. You can easily see what was changed, the PR can be updated based on review (you can add multiple reviewers), and it keeps things a lot cleaner and more organized. (Dealing with conflicts is a bit easier.)

For reference, here’s GitHub’s overview of working with Pull Requests.

CakePHP and jQuery auto-complete revisited

CakePHP 2.3
jQuery 1.10.2
jQuery UI 1.10.3

I’ve realized that my old post about jQuery auto-complete and cake is still pretty popular, but hopelessly outdated.

Therefore, I figured it would be a good time to revisit that old post and give it an update.
We’ve come so far! A lot has changed in the jQuery world since then.

For this tutorial we will create a single auto-complete field using jQuery and jQuery UI.
Although, to show off some features of CakePHP, we will also create a model, a controller, a view and a JSON response (more on that later).

The goal is simple: we’ll have a field where we’ll type the name of a car maker. Once at least one character is entered, we’ll show suggestions using jQuery UI’s auto-complete widget.

First we’ll start with the schema and some data.

Let’s create our cars table.

CREATE  TABLE `test`.`cars` (
  `id` INT NOT NULL AUTO_INCREMENT ,
  `name` VARCHAR(45) NULL ,
  `created` VARCHAR(45) NULL ,
  `modified` VARCHAR(45) NULL ,
  PRIMARY KEY (`id`) )
ENGINE = InnoDB
DEFAULT CHARACTER SET = utf8
COLLATE = utf8_unicode_ci;

Now let’s populate it with some popular brands:

INSERT INTO `cars`
(name, created, modified)
VALUES
( 'Aston Martin', now(), now() ),
( 'Acura', now(), now() ),
( 'Audi', now(), now() ),
( 'Bentley', now(), now() ),
( 'Bmw', now(), now()),
( 'Bugatti', now(), now() ),
( 'Buick', now(), now() ),
( 'Cadillac', now(), now() ),
( 'Chevrolet', now(), now() ),
( 'Chrysler', now(), now() ),
( 'Dodge', now(), now() ),
( 'Ferrari', now(), now() ),
( 'Ford', now(), now() ),
( 'Gmc', now(), now()),
( 'Honda', now(), now() ),
( 'Hyundai', now(), now() ),
( 'Infiniti', now(), now() ),
( 'Jaguar', now(), now() ),
( 'Jeep', now(), now() ),
( 'Lamborghini', now(), now() ),
( 'Lexus', now(), now() ),
( 'Lincoln', now(), now() ),
( 'Maserati', now(), now() ),
( 'Mazda', now(), now() ),
( 'Mercedes-Benz', now(), now() ),
( 'Mitsubishi', now(), now() ),
( 'Nissan', now(), now() ),
( 'Porsche', now(), now() ),
( 'Rolls Royce', now(), now() ),
( 'Subaru', now(), now() ),
( 'Tesla', now(), now() ),
( 'Toyota', now(), now() ),
( 'Volkswagen', now(), now() ),
( 'Volvo', now(), now() );

Let’s go ahead and create a new layout for this application. It will be pretty simple; let’s do something like this (this will be a new file in View/Layouts/basic.ctp):

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Sample App</title>
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <meta name="description" content="">
    <meta name="author" content="">
    <?php
      echo $this->Html->css('https://code.jquery.com/ui/1.10.3/themes/smoothness/jquery-ui.css');
    ?>
  </head>

  <body>
  <?php echo $content_for_layout; ?>

  <!-- our scripts will be here -->
  <?php echo $scripts_for_layout; ?>
  </body>
</html>

To give some style to the auto-complete field and the “suggest” drop-down, we’ll add the CSS file from jQuery UI’s built-in themes.

Next we’ll create a simple controller in Controller/CarsController.php:

<?php
  class CarsController extends AppController {

    public $layout = 'basic';

    public function index() {

    }
  }

The only thing we do differently from our standard controller setup is specifying the layout… which matches the file name above (minus the .ctp part).
We are leaving the index() action empty for now.

And finally let’s take a look at the view in View/Cars/index.ctp:

<?php
  //let's load jquery libs from google
  $this->Html->script('https://ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js', array('inline' => false));
  $this->Html->script('https://ajax.googleapis.com/ajax/libs/jqueryui/1.10.3/jquery-ui.min.js', array('inline' => false));

  //load file for this view to work on 'autocomplete' field
  $this->Html->script('View/Cars/index', array('inline' => false));

  //form with autocomplete class field
  echo $this->Form->create();
  echo $this->Form->input('name', array('class' => 'ui-autocomplete',
               'id' => 'autocomplete'));
  echo $this->Form->end();

First, we load our jQuery libs from Google. Next, notice $this->Html->script('View/Cars/index', array('inline' => false)); this tells CakePHP that we need to load a JavaScript file from our webroot/js/View/Cars/index.js. I recommend keeping your .js files in a similar directory structure as your .ctp files.

Because we have array('inline' => false) as a second argument, our script will be included where $scripts_for_layout is output in the layout.

This pretty much completes our CakePHP setup. We now need some code to retrieve data from our DB and some JavaScript code to act on our “#autocomplete” field.
As you’ve probably guessed, this JS code will be located in webroot/js/View/Cars/index.js:

(function($) {
  $('#autocomplete').autocomplete({
        source: "/cars/index.json"
  });
})(jQuery);

That’s it… One thing to note here is the path “/cars/index.json”. By adding the .json extension to our request URL we’ll utilize cake’s built-in JSON View and format the response as JSON (or JSONP).

Let’s take a look at that now. We will need to beef up our Controller just a little:

<?php
  class CarsController extends AppController {

    public $layout = 'basic';

    public $components = array('RequestHandler');

    public function index() {
      if ($this->request->is('ajax')) {
        $term = $this->request->query('term');
        $carNames = $this->Car->getCarNames($term);
        $this->set(compact('carNames'));
        $this->set('_serialize', 'carNames');
      }
    }
  }

One thing you’ll notice is that we’ve added the RequestHandler component. This is the magic in CakePHP that will properly handle our request from jQuery and allow us to set our response as a JSON object. You can find out more details about how '_serialize' and RequestHandler work by reading up in the manual.

It is important to note that in your routes file you’ll need to enable the parsing of extensions (i.e. index.json).
Simply edit app/Config/routes.php and add the following line:

Router::parseExtensions();
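If you’d rather not open the door to every possible extension, parseExtensions() also accepts a list, so you can limit it to JSON only:

Router::parseExtensions('json');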

Next you see that I am getting the list of car names from our Car model in a method called getCarNames().
This is because I’m trying to follow the golden rule of MVC: “fat models, skinny controllers”.
Although it’s easy to leave all the car-name-finding logic in the controller (and not have to create a model at all!), we’ll presume good architecture here and create a model to handle our data finding needs.

Here we go (app/Model/Car.php):

<?php
  class Car extends AppModel {

    public function getCarNames ($term = null) {
      if(!empty($term)) {
        $cars = $this->find('list', array(
          'conditions' => array(
            'name LIKE' => trim($term) . '%'
          )
        ));
        return $cars;
      }
      return false;
    }
  }

I use the standard find('list') method of CakePHP to get car names from our table above. The data is returned in an array formatted in a way that makes it very easy to return as a JSON object back to our jQuery. You can see that in the controller above.
First we set a variable for the view (as you’d do for any view) and then “serialize” it so it becomes a JSON object.

(By creating a Car model, cake automatically associated it with the “cars” table. Even if I didn’t actually create a Car model file, cake would still be able to execute basic model methods, as all of our models extend the built-in core Model. This topic is a bit more advanced and you can find out more about it by studying the API or checking in on our friendly IRC channel.)

In conclusion, we have everything we need to have a fully functional auto-complete using CakePHP and jQuery/jQuery UI.
If you were to type-in “f” in the input field, you’d get a list with “Ferrari” and “Ford”.

Easy CakePHP logging to FireBug with DebugKit and FirePHP

CakePHP 2.x

This is especially useful when you are working with ajax, or just in general when you don’t want to dump the data to the screen, which is often hard to read.

Using FireBug and FirePHP with CakePHP is pretty nicely detailed here.

Presuming all of that is working well, let’s see how to implement something nice and simple for our app.

First, let’s add a custom function to app/Config/bootstrap.php

/**
 * https://cakephp.lighthouseapp.com/projects/42880/docs-firecake
 *
 * @param  mixed $data     data to log
 * @param  string $logLevel log, info, warn, error
 * @param  string $label    give your data some label
 *
 */

function fclog($data, $logLevel = 'log', $label = null) {
    if (Configure::read() > 0) {
        App::uses('FireCake', 'DebugKit.Lib');
        FireCake::$logLevel($data, $label);
    }
}

Now we can use it anywhere in our app like so:

$myData = $this->SomeModel->find('all');
fclog($myData, 'error', 'Oh no we have a lot of data!');

The second argument (in this case “error”) has to correspond to one of the logging methods (log, info, warn, error).

If all goes well, you should see a nicely formatted array of data in your FireBug console.

Object mockery and Cake

We have already gone over unit testing a few times… and why it is a good thing to do.

If you have been interested in this topic for a while, then I’m sure you’ve come across the concept of “Mock Objects”.
But really, what are they? And more importantly how do we use them?

So, to get started let’s look at the “official” mock object definition.

http://en.wikipedia.org/wiki/Mock_object

In object-oriented programming, mock objects are simulated objects that mimic the behavior of real objects in controlled ways. A programmer typically creates a mock object to test the behavior of some other object, in much the same way that a car designer uses a crash test dummy to simulate the dynamic behavior of a human in vehicle impacts.

Actually, this is a pretty good analogy. It does make sense and lets us work out a real example.

In CakePHP there are several examples of working with “mocked” objects, especially when emulating some core things, like Session, Email, etc.
This is already nicely covered in the manual.

So what about some real world scenario?

I’d say the first thing that happens very often in the real world is that you have to retrofit your unit tests to “prove” that already existing production code is working.
In other words, you don’t write the tests up-front, but you need to write tests for an existing production/application code.

Let’s look at an example… we can imagine a random model with a method like this one:

public function checkIt($id = null) {
        $provider = $this->getProvider($id);

        if ($provider) {
            $info = $provider->getInfo();

            if ($info == '200') {
                return 'got 200';
            } else {
                return 'did not get 200';
            }
        }

        return false;
    }

If we are about to write a unit test for this method, then the first thing we have to figure out is… just what is $provider.. ?
Looking at the code it seems to drive all the little logic in this method.

Once we have a provider, we’d call its method getInfo() to see what sort of response we get back.

This seems simple enough, but what if our $provider needs a bunch of keys and access to a certain IP-restricted area, and each call to getInfo() costs $15 per query? :(

We can’t test or utilize the real $provider Class/Object… so this is a good time to just “mock” one up.

If we look at the code that needs to be tested, we don’t really care how $provider works internally. (Yes, it should be tested also.. but this test has nothing to do with internal workings of the $provider).
Anyway, we do care that when we call the getInfo() method it returns '200'. This is the important logic of our application, which we are trying to test.

Sounds good, so if we can fake a return value of ‘200’, we can presume that the provider is doing what it is supposed to do and our unit test for the checkIt() method is done.

Let’s take a look at our unit test:

public function testCheckIt() {
  $provider = $this->getMock('OurProvider', array('getInfo'));
  $provider->expects($this->any())
            ->method('getInfo')
            ->will($this->returnValue('200'));
  $result = $this->Account->checkIt(1);

  $this->assertEquals('got 200', $result);
}

We just mocked up a simple $provider object. We gave it one method: getInfo().
$provider->expects($this->any())->method('getInfo') basically tells our test that we are going to run this method. (If we look back to the model code: $info = $provider->getInfo(), then yeah, this is exactly what is supposed to happen).

Of course we also expect a value of 'got 200' to be returned by our method.

In our mocked object, we ensure this by doing ->will($this->returnValue('200'));. As far as we are concerned, as long as '200' is returned and the if ($info == '200') check in our model’s method evaluates correctly, we are happy.

We expect the return 'got 200'; to fire, hence the $this->assertEquals('got 200', $result); in our unit test. If we were to run this test right now, it would unfortunately (fortunately) fail, because the model’s code still relies on the actual $provider object, which probably has no proper way of running the getInfo() method.

So far, so good… but there is one obvious problem with our code and the test. The current model code uses the $provider that’s part of some model logic… and not at all the little object that we had mocked earlier.

Now then, we fall back on the technique which, I think, is called Dependency Injection. This is exactly where some people have a problem with tight coupling between unit tests and business logic. Albeit harmless, there are still a few “extra” lines of code that we have to add to our model to properly test it.
Basically we have to “inject” our mocked $provider object.

public function checkIt($id = null, $provider = null) {
        if (is_null($provider)) {
            $provider = $this->getProvider($id);
        }
        if ($provider) {
            $info = $provider->getInfo();

            if ($info == '200') {
                return 'got 200';
            } else {
                return 'did not get 200';
            }
        }

        return false;
    }

Now we can pass the $provider object to the model’s checkIt() method…
Yeah, we’ll modify our test and make sure that we pass our mocked object to properly test the code:

public function testCheckIt() {
    $provider = $this->getMock('OurProvider', array('getInfo'));
    $provider->expects($this->any())
         ->method('getInfo')
         ->will($this->returnValue('200'));
    $result = $this->Account->checkIt(1, $provider);

    $this->assertEquals('got 200', $result);
  }

The test should be passing now.

I personally feel that the use of mock objects in such a manner actually helps to properly test and, more importantly, refactor the code. If we see that Dependency Injection is required, for example, we have a few things to consider.

  1. Do we have good tests for our $provider?
  2. Can we avoid Dependency Injection and possibly refactor the model code? (See the sketch below.)
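On that second point, here is one hedged sketch of avoiding the injected argument altogether (assuming a CakePHP 2.x test case, where getMockForModel() is available): partially mock the model itself, stubbing only getProvider() and leaving checkIt() intact. The class names simply mirror the example above:

public function testCheckItWithoutInjection() {
    // fake provider, same as before
    $provider = $this->getMock('OurProvider', array('getInfo'));
    $provider->expects($this->any())
             ->method('getInfo')
             ->will($this->returnValue('200'));

    // partially mock the model: only getProvider() is stubbed, checkIt() stays real
    $account = $this->getMockForModel('Account', array('getProvider'));
    $account->expects($this->any())
            ->method('getProvider')
            ->will($this->returnValue($provider));

    $this->assertEquals('got 200', $account->checkIt(1));
}

The trade-off is that the production code keeps its original signature, at the price of the test knowing a bit more about the model’s internals.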

Improved cache and sessions storage with CakePHP and Redis

CakePHP 2.3 + Redis 2.6.7

For the vast majority of applications out there, the issue of cache and session storage doesn’t really come into play until the application is large enough that it needs to be scaled to multiple web servers.
At that point the filesystem (the default cache and session store) becomes inadequate for a few reasons:

  1. You need a central location for sessions because maintaining user sessions on individual servers will be extremely painful. You’d have to persist the user to the same web node.
  2. If you clear/invalidate the cache, it has to be done on all servers consistently or your application will be in a very unpredictable state.
  3. Writing to filesystem is relatively slow.

Often a simple enough solution is to use the database to keep track of sessions across all web servers. This option is nicely integrated into cake, and doesn’t require much in terms of configuration (other than changing the default value from “php” to “database” in core.php).

Today we’ll look at setting up Redis to be used as a cache and session storage. Although we are not going to leverage some of the advanced features, the benefits of having a very fast, memory-based store with out-of-the-box persistence are great enough for any application, whether you run off of one server or twenty-five.

Now that we know “why”… let’s see “how”…

First, we are going to install Redis server. For those on Ubuntu the installation process is honestly a little too easy:

apt-get install redis-server

Done.

Working with default config settings is recommended for a basic installation. (Cake will actually expect certain Redis defaults).

Now that we have our server installed and running, we need to tie PHP and Redis together by using one of the available libraries.
The one I like is:
https://github.com/nicolasff/phpredis

The installation is rather simple also.
Clone the repository and run the three commands provided at the link above.

Finally, let’s configure our application to use Redis. All we have to do is modify core.php

First, change the value of $engine to Redis

$engine = 'Redis';

Next, let’s create a separate cache config for our sessions.

Cache::config('session', array(
    'engine' => $engine,
    'prefix' => $prefix . 'cake_session_',
    'duration' => $duration
));
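As an aside, if your Redis server doesn’t live on the same box with the default port, the RedisEngine settings accept connection details as well; a hedged sketch (the host and port values are placeholders for your own setup):

Cache::config('session', array(
    'engine' => $engine,
    'prefix' => $prefix . 'cake_session_',
    'duration' => $duration,
    'server' => '127.0.0.1', // your Redis host
    'port' => 6379           // your Redis port
));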

Now we’ll tell cake to use the cache store for sessions as well. The “handler” key ties everything together.
(The ability to use a cache store for sessions is an awesome addition to CakePHP; just as well, you could use APC, memcached, or some other solution.)

Configure::write('Session', array(
        'defaults' => 'cache',
        'timeout' => 100,
        'start' => true,
        'checkAgent' => false,
        'handler' => array(
            'config' => 'session'
        )
    ));

With all of this in place, we can now test our application to make sure everything is working as expected:

  1. CakePHP core Redis Engine tests should be passing
  2. No cache or session files are written to the local file system
  3. We can run redis-cli and then “KEYS cake*” to see all cake-related keys in the data store. There should be cake_session_*, cake_model_*, cake_core_* keys listed. (This presumes you’ve tried to use the app by clicking around and possibly logging in.)