Increasing Postgres column name length

This post is more of a bookmark for me; the solution was scavenged from the internet. I have recently been working on an analytics project where I had to generate pivot (transposed) tables from the data, and this was the first time I ran into the identifier limits set by Postgres. Since it is a pivot, the values of one of my columns get transposed and used as column names, and this is where things started breaking: writing to Postgres failed with an error stating that the column names are not unique. After some digging I realized that Postgres has a column name limit of 63 bytes and anything longer gets truncated, so after truncation multiple keys became identical, causing this error.

The next step was to look at the data in my column; the values ranged from 20 to 300 characters long. I checked Redshift and BigQuery and they have similar limitations, around 128 bytes. After looking around for some time I found a solution: download the Postgres source, change NAMEDATALEN to 301 in src/include/pg_config_manual.h (remember, the maximum column name length is always NAMEDATALEN - 1), then follow the steps from the Postgres docs to compile the source, install and run Postgres. I have tested this on Postgres 9.6 as of now and it works.
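For reference, the change is a single constant in src/include/pg_config_manual.h:

#define NAMEDATALEN 301   /* default is 64, i.e. identifiers up to 63 bytes; 301 allows 300 */

followed by a standard source build as per the short-install doc in the references (roughly ./configure, make, make install with the appropriate privileges).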

Next I hit the limit on the maximum number of columns: my pivot table had 1968 columns, while Postgres caps a table at 1600 columns. Following this answer I looked into the source comments, and that looked quite overwhelming 😛 . Also, I have no control over how many columns the pivot will produce, so no matter what value I set, I might need more columns in the future. Instead I handled the scenario in my application code by splitting the data across multiple tables, as sketched below.
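Here is a minimal sketch of that splitting approach, assuming the pivoted data lives in a pandas DataFrame and is written through SQLAlchemy (the names and the chunk size are illustrative, not the actual project code):

import pandas as pd
from sqlalchemy import create_engine

MAX_COLS = 1500  # stay comfortably below Postgres' 1600-column limit

def store_wide_frame(df, base_name, engine):
    # write the frame in column chunks to tables base_name_0, base_name_1, ...
    for part, start in enumerate(range(0, len(df.columns), MAX_COLS)):
        chunk = df.iloc[:, start:start + MAX_COLS]
        chunk.to_sql('{}_{}'.format(base_name, part), engine, if_exists='replace')

# usage (assuming pivot_df is the pivoted DataFrame):
# engine = create_engine('postgresql://user:password@localhost:5432/analytics')
# store_wide_frame(pivot_df, 'pivot_result', engine)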

References:

  1. https://til.hashrocket.com/posts/8f87c65a0a-postgresqls-max-identifier-length-is-63-bytes
  2. https://stackoverflow.com/questions/6307317/how-to-change-postgres-table-field-name-limit
  3. https://www.postgresql.org/docs/9.6/install-short.html
  4. https://dba.stackexchange.com/questions/40137/in-postgresql-is-it-possible-to-change-the-maximum-number-of-columns-a-table-ca

Understanding RapidJson – Part 2

In my previous blog post on RapidJSON, a lot of people asked for a more detailed example in the comments, so here is part 2 of Understanding RapidJSON with a slightly more detailed example. I hope this helps you all.

We will improve directly on the last example from the previous post and modify the changeDom function to add a more complex object to the DOM tree.


template <typename Document>
void changeDom(Document& d){
    Value& node = d["hello"];
    node.SetString("c++");
    Document subdoc(&d.GetAllocator()); // sub-document sharing the main document's allocator
    subdoc.SetObject();                 // starting the object
    Value arr(kArrayType);              // the innermost array
    for (unsigned i = 0; i < 10; i++)
        arr.PushBack(i, subdoc.GetAllocator()); // PushBack expects an allocator; use one that outlives the DOM
    // adding the array to its parent object and so on, finally adding it to the parent doc object
    subdoc.AddMember("New", Value(kObjectType).Move().AddMember("Numbers", arr, subdoc.GetAllocator()), subdoc.GetAllocator());
    d.AddMember("testing", subdoc, d.GetAllocator()); // finally adding the sub-document to the main doc object
    d["f"] = true;
    d["t"].SetBool(false);
}

Here we are creating Value objects of type kArrayType and kObjectType and appending them to their parent node from innermost to outermost.

Before Manipulation
{
 "hello": "world",
 "t": true,
 "f": false,
 "n": null,
 "i": 123,
 "pi": 3.1416,
 "a": [
 0,
 1,
 2,
 3
 ]
}
After Manipulation
{
 "hello": "c++",
 "t": false,
 "f": true,
 "n": null,
 "i": 123,
 "pi": 3.1416,
 "a": [
    0,
    1,
    2,
    3
  ],
 "testing": {
     "New": {
         "Numbers": [
             0,
             1,
             2,
             3,
             4,
             5,
             6,
             7,
             8,
             9
         ]
     }
 }
}

The above changeDom function can also be written using a PrettyWriter object as follows:


template <typename Document>
void changeDom(Document& d){
    Value& node = d["hello"];
    node.SetString("c++");
    Document subdoc(&d.GetAllocator()); // sub-document
    // old school: write the json element by element
    StringBuffer s;
    PrettyWriter<StringBuffer> writer(s);
    writer.StartObject();
    writer.String("New");
    writer.StartObject();
    writer.String("Numbers");
    writer.StartArray();
    for (unsigned i = 0; i < 10; i++)
        writer.Uint(i);
    writer.EndArray();
    writer.EndObject();
    writer.EndObject();
    subdoc.Parse(s.GetString()); // parsing the string written to the buffer to form a sub DOM

    d.AddMember("testing", subdoc, d.GetAllocator()); // attaching the sub DOM to the main DOM object
    d["f"] = true;
    d["t"].SetBool(false);
}

Happy Coding! Cheers.

More reads:
https://stackoverflow.com/questions/32896695/rapidjson-add-external-sub-document-to-document


Glow LEDs with Google Home

Recently I experimented with Google Home, trying to voice-control LEDs. The whole thing can be split into two parts:

  1. A custom command that makes a web POST request to fetch the result.
  2. A simple Flask app that receives the POST request with parameters and glows some LEDs based on the request data.

For part one, the custom commands were possible thanks to the Google Actions APIs. I used API.AI for my purpose since it has good documentation. I won't go into detail explaining the form fields in API.AI; they have done a good job with the documentation, so I will just share screenshots of my configuration for quick reference and understanding. In API.AI the conversations are broken into intents. I used one intent (Default Welcome Intent) and a follow-up intent (Default Welcome Intent – custom) for my application.

top-intents

Here's my first intent, which greets the user and asks for an LED colour when the custom command “glow LEDs” is activated.

intent1

As you can see, the “User says” section is what defines my command; you can add multiple statements on which you want the command to activate. The Action and Contexts are set when you create a follow-up intent. The text response is the part which your Google Home will use as its reply.

Next is the follow-up intent, which takes the user's response as the input context (handled automatically when you create the follow-up intent), looks for the required parameters and tries to process the request.

user_interaction

Here the expected “User says” would be a colour; red, blue and green are what I allowed. In API.AI you can use their ML to process the speech and extract the parameters you need. I needed colours, hence used @sys.color. There are other entities like @sys.address, @sys.flight, etc. If these entities don't serve your purpose, you might want to go vanilla and process the speech on your web-API end. The latter part of the follow-up intent is a bit different: we fulfil the user request via a webhook here. The Response field is the fallback response in case the web request fails; the success response comes from the webhook response body.

home-response

The fulfilment option won't be activated until you add your webhook in the Fulfillment section. That's all for part one. You can also use the Google Web Simulator to test your application on the go.

webhook.png

In part two I used a Raspberry Pi, 3 LEDs (red, blue, green), a 1K ohm resistor, some wires, a breadboard (optional) and a T-cobbler board (optional). Now we will write a Flask application that accepts a POST request and sets the required GPIO pin output high or low.


from flask import Flask, request, jsonify
import RPi.GPIO as GPIO

app = Flask(__name__)

# BCM pin numbers for each LED
BLUE = 12
RED = 13
GREEN = 18

base_response = {
    'speech': "Abra Ka Dabra,{color} LED glowing",
    'displayText': "Abra Kaa Daabra, {color} LED glowing",
    'source': 'Manual'
}

@app.route('/', methods=['GET', 'POST'])
def index():
    if request.method == 'GET':
        text = """WELCOME to RBG<br>
        /red -> red LED<br>
        /blue -> blue LED<br>
        /green -> green LED<br>
        /clear -> clear all<br>
        """
        return text
    else:
        # webhook call from Api.ai: the colour arrives as result.resolvedQuery
        req_body = request.get_json()
        color = req_body['result']['resolvedQuery']
        if color == 'red':
            red()
        if color == 'green':
            green()
        if color == 'blue':
            blue()
        response = base_response.copy()
        response['speech'] = response['speech'].format(color=color)
        response['displayText'] = response['displayText'].format(color=color)
        return jsonify(response)

@app.route('/red')
def red():
    GPIO.output(BLUE, GPIO.LOW)
    GPIO.output(RED, GPIO.HIGH)
    GPIO.output(GREEN, GPIO.LOW)
    return "RED"

@app.route('/green')
def green():
    GPIO.output(BLUE, GPIO.LOW)
    GPIO.output(RED, GPIO.LOW)
    GPIO.output(GREEN, GPIO.HIGH)
    return "GREEN"

@app.route('/blue')
def blue():
    GPIO.output(BLUE, GPIO.HIGH)
    GPIO.output(RED, GPIO.LOW)
    GPIO.output(GREEN, GPIO.LOW)
    return "BLUE"

@app.route('/clear')
def clear():
    GPIO.output(BLUE, GPIO.LOW)
    GPIO.output(RED, GPIO.LOW)
    GPIO.output(GREEN, GPIO.LOW)
    return "Cleared"

if __name__ == '__main__':
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(BLUE, GPIO.OUT)
    GPIO.setup(RED, GPIO.OUT)
    GPIO.setup(GREEN, GPIO.OUT)
    app.run(host='0.0.0.0', port=5000, debug=True)
    GPIO.cleanup()
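To sanity-check the webhook handler without Google Home in the loop, something like the following can be run against the Flask app locally (a sketch; the payload only mimics the single field the handler above reads, not the full Api.ai request body):

import requests

# minimal stand-in for the Api.ai webhook call
payload = {'result': {'resolvedQuery': 'red'}}
resp = requests.post('http://localhost:5000/', json=payload)
print(resp.json())  # should contain the speech/displayText built by the handler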


You can check the request and response structure you need in the Api.ai docs. This application receives the calls from the Api.ai webhook and triggers the targeted LED depending on resolvedQuery. The code above was also written so that I can test locally with GET requests. I used pagekite.net to tunnel and expose my Flask application to the outside world. Following is the circuit diagram for the connections.

circuit

Following is the Result,



Learnings from BugYou

This is a long overdue post on a project called Bugyou, which recently went live on our servers. It is a tool for reporting Autocloud image build test results to different issue-tracking services. It is split into two parts: Bugyou and Bugyou-Plugins. Bugyou is essentially a fedmsg consumer that listens for and filters Autocloud messages and pushes them down a retask queue.

On the other end of the queue there is a worker that filters the messages and pushes them to the different service plugins available. The plugin processes read these messages and perform the necessary actions.

A simple flow of action diagram for this project can be found here

We wanted bugyou-plugins to be able to update topics for Bugyou while it is running, so we came up with an instruction queue that passes instructions from the plugins to Bugyou. To accomplish this I used the multiprocessing module, and for communication between the processes I used a Manager proxy dict object. One thing I learned while working with Manager dict objects is that their values don't get updated implicitly: they are proxy objects and are not aware that a mutable value inside them has changed, hence you need to reassign the value every time.

 

from multiprocessing import Process,Manager
import time

def message_passer():
    manager = Manager()
    switch = manager.dict()
    switch.update({'lists' : [1,2,3,4]})
    proc = Process(target=keep_printing,args=(switch,))
    proc.start()
    i=1
    while i!=0:
        i=input()
        l = switch['lists']
        l.append(i)
        switch['lists']=l #reassigning here, values wont be updated in switch otherwise
    proc.join()

def keep_printing(switch):
    while True:
        time.sleep(2)
        print switch

if __name__ == '__main__':
    message_passer()

 

Things to do:

Make bugyou capable of picking up lost messages from datanomer.


Dev-Sprint @ PyconIn 2015

With each passing day we are getting closer to PyCon India 2015, and the volunteers are working hard to make the event a success. Out of my love for the language and for being part of the awesome Python community, I went ahead and volunteered for the Devsprints, which have been introduced to PyCon India this year.

Think of a Devsprint as having a good time, coding hands-on with your fellow Python programmers. The atmosphere will be an intense one, extremely focused on projects, with mentors hanging around to help you overcome any roadblock that you might face. The usual outcome of these intense sprints is patches, bug fixes and numerous upstream pull requests from almost all the participants.

Now that the call-for-proposals window is closed and the Devsprint ideas have been finalized, we are busy engaging with the project mentors to get more information about their proposals.

In the meantime, interested participants are expected to read about the finalized proposals and complete the registration for the event using this link. Registration for the Devsprint is free of cost 😀 but you need to have a valid PyCon India ticket.

Oh! Did I mention I will also be mentoring two projects 😉 : Pagure and Anitya.

So hurry up and register if you haven't already! The last day for registration is the 24th of September. We have limited seats and it's first come, first served, so Get Set GO!

tous vous voir à PyConIndia(see you all in pyconindia)


Event Report: FudConIn 2015

As promised, I am back from FUDCon India with loads of experience and new know-how about various tools.

::DAY ONE::

The first day started with a keynote by Dennis Gilmore on “Delivering Fedora for everything and everyone”, where he discussed the future plans of the Fedora release engineering team. Next I moved to a different room where a talk on GlusterFS was being held by Vikhyat Umrao. This was the first time I had heard about the Gluster File System, so I was pretty curious to learn more about it. After that I attended two more back-to-back talks on GlusterFS: “Geo-Replication and Disaster Recovery in GlusterFS” by Bipin Kunal and “Efficient data maintenance in GlusterFS using Databases” by Joseph Elwin Fernandes. I must add that the session by Joseph Elwin Fernandes was great; he is a really good speaker 🙂 . After lunch I attended Jared Smith's talk on “What's new in Drupal 8?”, where I learned about the new features coming to Drupal 8, like REST support, editing content directly, easy installation, language selection and so on. The day ended with a keynote by Harish Pillay on how to evaluate open source projects; he also spoke about “open source prospector”, a tool to track FOSS projects across the globe.

::DAY TWO::

Day two started with a keynote by Jiri Eischmann on the future of Fedora Workstation. He spoke about better graphics support and better battery life in F23, as well as more reliable weekly updates. Then there was a session on Haskell by Jens Petersen. Next I attended Rejy Cyriac's talk on SELinux, after which I too configured my SELinux for the good 🙂 . Soon after that an interesting workshop was run by Mayur Patil on how to compile the Linux kernel, and we wrote a small test kernel module. Day two ended with Tenzin Chokden's keynote on how GhostNet affected the Tibetan community and how Linux helped them stop GhostNet from spreading.

::DAY THREE::

Day three was mostly about meeting people and attending fewer talks and workshops. Aditya Patawari and Lalatendu Mohanty ran a workshop on Docker basics. Next, a Flask 101 workshop was taken by Sayan Chowdhury and Ratnadeep Debnath. Finally, the closing ceremony by Rupali Talwatkar after lunch. Oh yes, there was also a photo session 😀 .

Will update with the link to the talks soon.

::ABOUT PUNE::

Pune is a great place! Had awesome food and enjoyed it a lot! I also nearly got lost on my way to the venue, since there were two MITs and I reached the wrong one! There was a light drizzle on the day of my return. An awesome experience altogether; waiting for the next FUDCon India 🙂 .

PS: Pics coming soon


My Final Year Project aka Online FIR/GD lodging system.

Hello all, this will be a small post. I will be discussing my college final year project here. The topic of my project was an Online FIR/GD Lodging System. Since I had worked with Flask before, I chose Django 1.8 to try something new, and learning the basics of Django wasn't very difficult. My college work on it is now over, but I have decided to add a few more features.

About the Application

It's a simple web app with a form to fill in the details necessary for lodging an FIR/GD. The main problem with this kind of application is establishing the identity of the user. To identify users properly I decided to use the ID proofs approved by the government for different organizations, i.e. Voter ID number, Aadhaar ID number, Ration Card number and PAN Card number. Obviously this is a prototype, so I had the freedom to use dummy data for users' Aadhaar, Voter and PAN cards in database tables; a practical implementation would surely need access to the above-mentioned government databases. A user can choose any two ID proofs out of the four and fill in the details; the two ID records must have the same name and birth date for the user to be identified. Next, the report (FIR/GD) form is filled in by the person. Currently they also need to explicitly select the police station they want to lodge the report with; the police stations are shortlisted according to the city and state. Then the form is submitted.

I did not use user auth in this application. We already have the user's identity, but we need ways to contact the user, hence a phone number and email need to be filled in on the form. Next comes verification of the mobile number. I really didn't want to spend much on this, so I decided to use the Site2SMS API, which allows you to send a limited number of free SMS, and sent the OTP through it. Since the OTPs are short-lived (5 minutes) I decided not to use the database, to avoid the overhead of frequent writes, and used Redis instead; its TTL feature made expiring OTPs easy (a rough sketch of that piece follows below). Finally, a PDF report is sent via email: I used xhtml2pdf to generate the PDFs and, again saving money, Gmail for sending out the emails. 😉 That pretty much sums up the application. You can check out the source code.
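Here's a rough sketch of the Redis-backed OTP flow, assuming redis-py and illustrative key names (not the actual project code); SETEX lets the keys expire on their own after five minutes:

import random
import redis

r = redis.StrictRedis(host='localhost', port=6379, db=0)

def store_otp(phone_number):
    otp = random.randint(100000, 999999)
    # key expires automatically after 300 seconds, so no cleanup writes are needed
    r.setex('otp:{}'.format(phone_number), 300, otp)
    return otp

def verify_otp(phone_number, submitted):
    saved = r.get('otp:{}'.format(phone_number))
    return saved is not None and saved.decode() == str(submitted)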

Future Plans

– Right now everything I have done is for the user who wants to lodge an FIR/GD. The police portal exists, but the reports are not updated instantly. I am planning to use Django signals with django-socketio to update new reports instantly, right after they are inserted into the database.

– Work out a way to automatically register the FIR/GD with the correct police station depending on the location of the event.

– This would be most handy as a mobile application, so either create an API and a mobile application or make the design responsive to suit any device.

– And finally, write unit tests. (This is my New Year resolution.) 😛

P.S.- Suggest some cool name for this app.
P.P.S.- You are most welcome to request a feature or add a feature and send a PR on Github.

This year FUDCon starts on the 26th. I will be attending it and maybe meeting some of you; the next post will be on FUDCon 2015. Till then, happy coding. 🙂


Understanding RapidJson

With new technologies, software needs to evolve and adapt. My new task is to make cppagent generate output in JSON (JavaScript Object Notation) format. Last week I spent some time trying out different libraries and finally settled on RapidJSON. RapidJSON is a JSON manipulation library for C++ which is fast, simple and compatible with different C++ compilers on different platforms. In this post we will look at example code to generate, parse and manipulate JSON data. For people who want to use this library, I would highly recommend playing with and understanding the example code first.

First we will write a simple program to produce the sample JSON below (the same simplewriter.cpp as in the examples):

{
    "hello": "world",
    "t": true,
    "f": false,
    "n": null,
    "i": 123,
    "pi": 3.1416,
    "a": [
        0,
        1,
        2,
        3
    ]
}

To generate this JSON output you need:

  • a StringBuffer object, the buffer to which the JSON output is written.
  • a Writer object to write JSON to the buffer. Here I have used a PrettyWriter object to write human-readable, properly indented JSON output.
  • the functions StartObject/EndObject to open and close a JSON object, i.e. “{” and “}” respectively.
  • the functions StartArray/EndArray to open and close a JSON array, i.e. “[” and “]”.
  • the functions String(), Uint(), Bool(), Null() and Double(), called on the writer object to write a string, unsigned integer, boolean, null and floating point number respectively.
#include "rapidjson/stringbuffer.h"
#include "rapidjson/prettywriter.h"
#include <iostream>

using namespace rapidjson;
using namespace std;

template <typename Writer>
void display(Writer& writer );

int main() {
 StringBuffer s;
 PrettyWriter<StringBuffer> writer(s);
 display(writer);
 cout << s.GetString() << endl;   // GetString() stringify the Json
 }

template <typename Writer>
void display(Writer& writer){
 writer.StartObject();  // write "{"
 writer.String("hello"); // write string "hello"
 writer.String("world");
 writer.String("t");
 writer.Bool(true);   // write boolean value true
 writer.String("f");
 writer.Bool(false);
 writer.String("n");
 writer.Null();        // write null
 writer.String("i");
 writer.Uint(123);     // write unsigned integer value
 writer.String("pi");
 writer.Double(3.1416); // write floating point numbers
 writer.String("a");
 writer.StartArray();  // write "["
 for (unsigned i = 0; i < 4; i++)
 writer.Uint(i);
 writer.EndArray();   // End Array "]"
 writer.EndObject();  // end Object "}"
}

Next we will manipulate the JSON document and change the value for the key "hello" to "c++".

To manipulate:

  • First you need to parse your JSON data into a Document object.
  • Next you may use a Value reference to the value of the desired node/key, or you can access it directly as doc_object["key"].
  • Finally you need to call the Accept method, passing the Writer object, to write the document to the StringBuffer object.

The function below changes the values for the keys "hello", "t" and "f" to "c++", false and true respectively.


template <typename Document>
void changeDom(Document& d){
// any of methods shown below can be used to change the document
Value& node = d["hello"];  // using a reference
node.SetString("c++"); // call SetString() on the reference
d["f"] = true; // access directly and change
d["t"].SetBool(false); // best way
}

Now to put it all together:

Before Manipulation
{
     "hello": "world",
     "t": true,
     "f": false,
     "n": null,
     "i": 123,
     "pi": 3.1416,
     "a": [
        0,
        1,
        2,
        3
     ]
}
After Manipulation
{
     "hello": "c++",
     "t": false,
     "f": true,
     "n": null,
     "i": 123,
     "pi": 3.1416,
     "a": [
        0,
        1,
        2,
        3
      ]
}

The final code to display the above output:


#include "rapidjson/stringbuffer.h"
#include "rapidjson/prettywriter.h"
#include "rapidjson/document.h"
#include <iostream>

using namespace rapidjson;
using namespace std;

template <typename Writer>
void display(Writer& writer);

template <typename Document>
void changeDom(Document& d);

int main() {
 StringBuffer s;
 Document d;
 PrettyWriter<StringBuffer> writer(s);
 display(writer);
 cout << "Before Manupulation\n" << s.GetString() << endl ;
 d.Parse(s.GetString());
 changeDom(d);
 s.Clear();   // clear the buffer to prepare for a new json document
 writer.Reset(s);  // resetting writer for a fresh json doc
 d.Accept(writer); // writing parsed document to buffer
 cout << "After Manupulation\n" << s.GetString() << endl;
 }

template <typename Document>
void changeDom(Document& d){
Value& node = d["hello"];
node.SetString("c++");
d["f"] = true;
d["t"].SetBool(false);
}

template <typename Writer>
void display(Writer& writer){
 writer.StartObject();
 writer.String("hello");
 writer.String("world");
 writer.String("t");
 writer.Bool(true);
 writer.String("f");
 writer.Bool(false);
 writer.String("n");
 writer.Null();
 writer.String("i");
 writer.Uint(123);
 writer.String("pi");
 writer.Double(3.1416);
 writer.String("a");
 writer.StartArray();
 for (unsigned i = 0; i < 4; i++)
 writer.Uint(i);
 writer.EndArray();
 writer.EndObject();
}

[EDIT]

Added more complex examples in Understanding RapidJson – Part 2


Finally integrating Gcov and Lcov tool into Cppagent build process

This is most probably my final task on implementing code coverage analysis for MTConnect cppagent. In my last post I showed how the executable files are generated using Makefiles. In cppagent the Makefiles are actually autogenerated by CMake, a cross-platform Makefile generator. To integrate Gcov and Lcov into the build system we need to start at the very beginning of the process, which is CMake. The CMake commands are written in CMakeLists.txt files. A minimal CMake file could look something like this, with test_srcs as the source file and agent_test as the executable:


cmake_minimum_required (VERSION 2.6)

project(test)

set(test_srcs menu.cpp)

add_executable(agent_test ${test_srcs})

Now let's expand and understand the CMakeLists.txt for cppagent.

set(CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/../agent/CMake;${CMAKE_MODULE_PATH}")

This sets the path where CMake looks for additional modules when commands like include() or find_package() are used. The set command assigns values to variables. You can print out all the available variables using the following snippet.

get_cmake_property(_variableNames VARIABLES)
foreach (_variableName ${_variableNames})
    message(STATUS "${_variableName}=${${_variableName}}")
endforeach()

source: stackoverflow.com

The next section of the file:

if(WIN32)
 set(LibXML2_INCLUDE_DIRS ../win32/libxml2-2.9/include )
 
 if(CMAKE_CL_64)
 set(bits 64)
 else(CMAKE_CL_64)
 set(bits 32)
 endif(CMAKE_CL_64)
 
 file(GLOB LibXML2_LIBRARIES "../win32/libxml2-2.9/lib/libxml2_a_v120_${bits}.lib")
 file(GLOB LibXML2_DEBUG_LIBRARIES ../win32/libxml2-2.9/lib/libxml2d_a_v120_${bits}.lib)
 set(CPPUNIT_INCLUDE_DIR ../win32/cppunit-1.12.1/include)
 file(GLOB CPPUNIT_LIBRARY ../win32/cppunit-1.12.1/lib/cppunitd_v120_a.lib)
endif(WIN32)

Here we check which platform we are working on and set the library variables to the Windows-based libraries accordingly. We will discuss the file command later.

if(UNIX)
 execute_process(COMMAND uname OUTPUT_STRIP_TRAILING_WHITESPACE OUTPUT_VARIABLE CMAKE_SYSTEM_NAME)
 if(CMAKE_SYSTEM_NAME MATCHES Linux)
 set(LINUX_LIBRARIES pthread)
 endif(CMAKE_SYSTEM_NAME MATCHES Linux)
endif(UNIX)

Next, if the OS is Unix-based, we execute the uname command as a child process and store the output in the CMAKE_SYSTEM_NAME variable. If it is a Linux environment, "Linux" will be stored in CMAKE_SYSTEM_NAME, and hence we set the LINUX_LIBRARIES variable to pthread (the threading library for Linux). Next we find something similar to what we did in our test CMakeLists.txt: the project command sets the project name, version etc., and the next line stores the source file paths in a variable test_srcs.

set( test_srcs file1 file2 ...)

Now let's discuss the next few lines.

file(GLOB test_headers *.hpp ../agent/*.hpp)

The file command is used to manipulate files. You can read, write and append files, and GLOB allows globbing, which generates a list of files matching the expression you give. So here a wildcard expression is used to generate a list of all header files in the given folders matching *.hpp.

include_directories(../lib ../agent .)

This command basically tells cmake to add the directories specified by it to its list of directories when looking for a file.

find_package(CppUnit REQUIRED)

This command looks for an external package and loads its settings. REQUIRED makes sure the package is found; otherwise CMake stops with an error.

add_definitions(-DDLIB_NO_GUI_SUPPORT ${LibXML2_DEFINITIONS})

add_definitions is where the additional compile time flags are added.

add_executable(agent_test ${test_srcs} ${test_headers})

This line creates an executable target named agent_test, with test_srcs and test_headers as its source and header files respectively.

target_link_libraries(agent_test ${LibXML2_LIBRARIES} ${CPPUNIT_LIBRARY} ${LINUX_LIBRARIES})

This line links the executable against its libraries.

::Gcov & Lcov Integration::

Now that we know our CMake file well, let's make the necessary changes.

Step #1

Add two variables and set the appropriate compile and linking flags for gcov and lcov respectively.

set(GCOV_COMPILE_FLAGS "-fprofile-arcs -ftest-coverage")
set(GCOV_LINK_FLAGS "-lgcov")

Step #2

Split the sources into two halves: one being the unit test source files and the other being the cppagent source files. We are not interested in code coverage of the unit test files themselves.

set( test_srcs test.cpp
 adapter_test.cpp
 agent_test.cpp
 checkpoint_test.cpp
 config_test.cpp
 component_test.cpp
 component_event_test.cpp
 connector_test.cpp
 data_item_test.cpp
 device_test.cpp
 globals_test.cpp
 xml_parser_test.cpp
 test_globals.cpp
 xml_printer_test.cpp
 asset_test.cpp
 change_observer_test.cpp
 cutting_tool_test.cpp
 )
set(agent_srcs ../agent/adapter.cpp 
 ../agent/agent.cpp 
 ../agent/checkpoint.cpp
 ../agent/component.cpp 
 ../agent/component_event.cpp 
 ../agent/change_observer.cpp
 ../agent/connector.cpp
 ../agent/cutting_tool.cpp
 ../agent/data_item.cpp 
 ../agent/device.cpp 
 ../agent/globals.cpp 
 ../agent/options.cpp
 ../agent/xml_parser.cpp 
 ../agent/xml_printer.cpp
 ../agent/config.cpp
 ../agent/service.cpp
 ../agent/ref_counted.cpp
 ../agent/asset.cpp
 ../agent/version.cpp
 ../agent/rolling_file_logger.cpp
 )

Step #3

As mentioned in Step #2, we are not interested in the unit test source files, so here we add the Gcov compile flags only to the cppagent source files. That way .gcno files are generated only for the agent source files.

set_property(SOURCE ${agent_srcs} APPEND PROPERTY COMPILE_FLAGS ${GCOV_COMPILE_FLAGS})

Step #4

We also know that for coverage analysis we need to link against the gcov library, which we do as follows.

target_link_libraries(agent_test ${LibXML2_LIBRARIES} ${CPPUNIT_LIBRARY} ${LINUX_LIBRARIES} ${GCOV_LINK_FLAGS}) 

Step #5

Since we love things to be automated, I added a target to make to automate the whole process: running the tests, copying the .gcno files and moving the .gcda files into a folder, running the lcov command to read the files and prepare easily readable statistics, and finally the genhtml command to generate the HTML output. add_custom_target allows you to add a custom target for make (here I named the target "cov"), and COMMAND lets you specify simple shell commands.

add_custom_target( cov
COMMAND [ -d Coverage ]&&rm -rf Coverage/||echo "No folder"
COMMAND mkdir Coverage
COMMAND agent_test
COMMAND cp CMakeFiles/agent_test.dir/__/agent/*.gcno Coverage/
COMMAND mv CMakeFiles/agent_test.dir/__/agent/*.gcda Coverage/
COMMAND cd Coverage&&lcov -t "result" -o cppagent_coverage.info -c -d .
COMMAND cd Coverage&&genhtml -o coverage cppagent_coverage.info
COMMENT "Generated Coverage Report Successfully!"
)

::Conclusion::

Now to build test and generate report.

Step #1 cmake .    // in the project root, i.e. cppagent/
Step #2 cd test    // since we want to build only the tests
Step #3 make       // builds the agent_test executable
Step #4 make cov   // runs the tests, copies all files to the Coverage folder, generates the report

So we just need to open Coverage/coverage/index.html to view the analysis report. The final page will look something like this.


Using Gcov and Lcov to generate Test Coverage Stats for Cppagent

In my last post we generated code coverage statistics for a sample C++ program. In this post I will be using gcov and lcov to generate similar coverage for the tests in cppagent. To use gcov we first need to compile the source files with the --coverage flag. Our sample C++ program was a single file, so it was easy to compile, but cppagent uses Makefiles to build the project. Hence I started with the Makefile, looking for the build instructions.
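As a quick recap of the single-file case from the last post, the flow is roughly as follows (sample.cpp is just a placeholder name):

g++ --coverage -c sample.cpp       # compile with instrumentation, emits sample.gcno
g++ --coverage sample.o -o sample  # link, pulling in the gcov runtime
./sample                           # running the binary emits sample.gcda
gcov sample.cpp                    # writes sample.cpp.gcov with per-line execution counts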

In my previous posts I discussed the steps for building the agent_test executable, which starts by running the make command in the test folder. So I started tracing the build steps from the Makefile in the test folder. Since we run make without any arguments, the default target gets executed.

The first few lines of the file were as below.

# Default target executed when no arguments are given to make.

default_target: all

.PHONY : default_target

These lines specify that the default_target for this build is all. Moving down the file, we see the rules for all.

# The main all target

all: cmake_check_build_system

cd /home/subho/work/github/cppagent_new/cppagent && $(CMAKE_COMMAND) -E cmake_progress_start /home/subho/work/github/cppagent_new/cppagent/CMakeFiles /home/subho/work/github/cppagent_new/cppagent/test/CMakeFiles/progress.marks

cd /home/subho/work/github/cppagent_new/cppagent && $(MAKE) -f CMakeFiles/Makefile2 test/all

$(CMAKE_COMMAND) -E cmake_progress_start /home/subho/work/github/cppagent_new/cppagent/CMakeFiles 0

.PHONY : all

So here in the line

cd /home/subho/work/github/cppagent_new/cppagent && $(MAKE) -f CMakeFiles/Makefile2 test/all

We can see Makefile2 is invoked with target test/all.

In Makefile2, towards the end of the file, we can see the build instructions for the test/all target:

# Directory level rules for directory test

# Convenience name for "all" pass in the directory.

test/all: test/CMakeFiles/agent_test.dir/all

.PHONY : test/all

The rule says to run the commands defined under target test/CMakeFiles/agent_test.dir/all. These commands are:

test/CMakeFiles/agent_test.dir/all:

$(MAKE) -f test/CMakeFiles/agent_test.dir/build.make test/CMakeFiles/agent_test.dir/depend

$(MAKE) -f test/CMakeFiles/agent_test.dir/build.make test/CMakeFiles/agent_test.dir/build

$(CMAKE_COMMAND) -E cmake_progress_report /home/subho/work/github/cppagent_new/cppagent/CMakeFiles 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58

@echo "Built target agent_test"

.PHONY : test/CMakeFiles/agent_test.dir/all

The first two lines run the build.make file with the targets ‘test/CMakeFiles/agent_test.dir/depend’ and ‘test/CMakeFiles/agent_test.dir/build’. The build.make file contains the compile instructions for each of the C++ files; it lives in the ‘test/CMakeFiles/agent_test.dir’ folder along with files like flag.make, link.txt, etc. The flag.make file contains all the compile flags and link.txt contains the library flags needed by the linker. By adding the --coverage flag to these files we can make the C++ source files compile with gcov linked in, so the .gcno files are generated when the make command is run.

After that we need to run agent_test as usual, which creates the .gcda data files. Then we gather the .gcda and .gcno files together and run the lcov and genhtml commands to obtain the HTML output.
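Concretely, once the .gcno and .gcda files are collected in one directory, the commands look like this (the -t title and output names are arbitrary):

lcov -t "result" -o cppagent_coverage.info -c -d .   # capture coverage data from the current directory
genhtml -o coverage cppagent_coverage.info           # generate the HTML report under coverage/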

