Finally integrating the Gcov and Lcov tools into the Cppagent build process

This is most probably my final task on implementing code coverage analysis for the MTConnect Cppagent. In my last post I showed how the executable files are generated using Makefiles. In Cppagent the Makefiles are actually autogenerated by CMake, a cross-platform Makefile generator. To integrate Gcov and Lcov into the build system we need to start from the very beginning of the process, which is CMake. CMake commands are written in CMakeLists.txt files. A minimal CMake file could look something like the following, where test_srcs is the source file list and agent_test is the executable.


cmake_minimum_required(VERSION 2.6)
project(test)
set(test_srcs menu.cpp)
add_executable(agent_test ${test_srcs})

Now let's expand on and understand the CMakeLists.txt for Cppagent.

set(CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/../agent/CMake;${CMAKE_MODULE_PATH}") 

This sets the path where CMake looks for additional modules when commands such as find_package or include are used. The set command assigns values to variables. You can print out all the available variables using the following snippet:

get_cmake_property(_variableNames VARIABLES)
foreach (_variableName ${_variableNames})
    message(STATUS "${_variableName}=${${_variableName}}")
endforeach()

source: stackoverflow.com

The next section of the file:

if(WIN32)
  set(LibXML2_INCLUDE_DIRS ../win32/libxml2-2.9/include)

  if(CMAKE_CL_64)
    set(bits 64)
  else(CMAKE_CL_64)
    set(bits 32)
  endif(CMAKE_CL_64)

  file(GLOB LibXML2_LIBRARIES "../win32/libxml2-2.9/lib/libxml2_a_v120_${bits}.lib")
  file(GLOB LibXML2_DEBUG_LIBRARIES ../win32/libxml2-2.9/lib/libxml2d_a_v120_${bits}.lib)
  set(CPPUNIT_INCLUDE_DIR ../win32/cppunit-1.12.1/include)
  file(GLOB CPPUNIT_LIBRARY ../win32/cppunit-1.12.1/lib/cppunitd_v120_a.lib)
endif(WIN32)

Here we check which platform we are working on, and the library variables are set to the Windows-based libraries accordingly. We will discuss the file command later.

if(UNIX)
  execute_process(COMMAND uname OUTPUT_STRIP_TRAILING_WHITESPACE OUTPUT_VARIABLE CMAKE_SYSTEM_NAME)
  if(CMAKE_SYSTEM_NAME MATCHES Linux)
    set(LINUX_LIBRARIES pthread)
  endif(CMAKE_SYSTEM_NAME MATCHES Linux)
endif(UNIX)

Next, if the OS platform is Unix-based, we execute the command uname as a child process and store its output in the CMAKE_SYSTEM_NAME variable. In a Linux environment, "Linux" will be stored in CMAKE_SYSTEM_NAME, so we set the variable LINUX_LIBRARIES to pthread (the threading library for Linux). Then we find something similar to what we did in our test CMakeLists.txt: the project command sets the project name, version, etc., and the next line stores the source file paths in a variable test_srcs.

set( test_srcs file1 file2 ...)
Now let's discuss the next few lines.
file(GLOB test_headers *.hpp ../agent/*.hpp)

The file command is used to manipulate files. You can read, write to, and append to files; GLOB additionally allows globbing, which generates a list of files matching the expression you give. So here a wildcard expression is used to generate a list of all header files (*.hpp) in the given folders.

include_directories(../lib ../agent .)

This command tells CMake to add the specified directories to the list of directories it searches when looking for a header file.

find_package(CppUnit REQUIRED)

This command looks for an external package and loads its settings. REQUIRED ensures that CMake stops with an error if the package cannot be found.
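For context, find_package works by locating a find module such as FindCppUnit.cmake on CMAKE_MODULE_PATH. A minimal sketch of such a module (illustrative only; the project's actual module may differ) would use find_path and find_library to fill in variables like CPPUNIT_LIBRARY that are used later in this file:

# Minimal FindCppUnit.cmake sketch (illustrative only)
find_path(CPPUNIT_INCLUDE_DIR cppunit/TestCase.h)
find_library(CPPUNIT_LIBRARY NAMES cppunit)

include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(CppUnit DEFAULT_MSG CPPUNIT_LIBRARY CPPUNIT_INCLUDE_DIR)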

add_definitions(-DDLIB_NO_GUI_SUPPORT ${LibXML2_DEFINITIONS})

add_definitions is where additional compile-time definitions and flags are added.

add_executable(agent_test ${test_srcs} ${test_headers})

This line generates an executable target named agent_test, with test_srcs and test_headers as its source and header files respectively.

target_link_libraries(agent_test ${LibXML2_LIBRARIES} ${CPPUNIT_LIBRARY} ${LINUX_LIBRARIES})

This line links the executable against its libraries.

::Gcov & Lcov Integration::

Now that we know our CMake file well, let's make the necessary changes.

Step #1

Add two variables holding the appropriate compile and link flags for gcov and lcov respectively.

set(GCOV_COMPILE_FLAGS "-fprofile-arcs -ftest-coverage")
set(GCOV_LINK_FLAGS "-lgcov")

Step #2

Split the sources into two halves: one being the unit test source files and the other being the cppagent source files. We are not interested in code coverage of the unit test files.

set( test_srcs test.cpp
 adapter_test.cpp
 agent_test.cpp
 checkpoint_test.cpp
 config_test.cpp
 component_test.cpp
 component_event_test.cpp
 connector_test.cpp
 data_item_test.cpp
 device_test.cpp
 globals_test.cpp
 xml_parser_test.cpp
 test_globals.cpp
 xml_printer_test.cpp
 asset_test.cpp
 change_observer_test.cpp
 cutting_tool_test.cpp
 )
set(agent_srcs ../agent/adapter.cpp 
 ../agent/agent.cpp 
 ../agent/checkpoint.cpp
 ../agent/component.cpp 
 ../agent/component_event.cpp 
 ../agent/change_observer.cpp
 ../agent/connector.cpp
 ../agent/cutting_tool.cpp
 ../agent/data_item.cpp 
 ../agent/device.cpp 
 ../agent/globals.cpp 
 ../agent/options.cpp
 ../agent/xml_parser.cpp 
 ../agent/xml_printer.cpp
 ../agent/config.cpp
 ../agent/service.cpp
 ../agent/ref_counted.cpp
 ../agent/asset.cpp
 ../agent/version.cpp
 ../agent/rolling_file_logger.cpp
 )

Step #3

As mentioned in Step #2, we are not interested in the unit test source files, so here we add the Gcov compile flags only to the cppagent source files. This way, .gcno files are generated only for the agent source files.

set_property(SOURCE ${agent_srcs} APPEND PROPERTY COMPILE_FLAGS ${GCOV_COMPILE_FLAGS})

Step #4

We also know that for coverage analysis we need to link against the gcov library ("-lgcov"). Therefore, we do this in the following way.

target_link_libraries(agent_test ${LibXML2_LIBRARIES} ${CPPUNIT_LIBRARY} ${LINUX_LIBRARIES} ${GCOV_LINK_FLAGS}) 

Step #5

Since we love things to be automated, I added a target for the make command that automates the whole process: running the tests, copying the .gcno files and moving the .gcda files to a folder, running the lcov command to read those files and prepare easily readable statistics, and finally running the genhtml command to generate the HTML output. add_custom_target lets you add a custom target for make (here I named the target "cov"), and COMMAND lets you specify simple shell commands.

add_custom_target(cov
  COMMAND [ -d Coverage ] && rm -rf Coverage/ || echo "No folder"
  COMMAND mkdir Coverage
  COMMAND agent_test
  COMMAND cp CMakeFiles/agent_test.dir/__/agent/*.gcno Coverage/
  COMMAND mv CMakeFiles/agent_test.dir/__/agent/*.gcda Coverage/
  COMMAND cd Coverage && lcov -t "result" -o cppagent_coverage.info -c -d .
  COMMAND cd Coverage && genhtml -o coverage cppagent_coverage.info
  COMMENT "Generated Coverage Report Successfully!"
)

::Conclusion::

Now, to build the tests and generate the report:

Step #1 cmake .    // in the project root, which is cppagent/
Step #2 cd test    // since we want to build only the tests
Step #3 make       // this builds the agent_test executable
Step #4 make cov   // runs the tests, copies all files to the Coverage folder, generates the report

So we just need to open Coverage/coverage/index.html to view the analysis report.


Using Gcov and Lcov to generate Test Coverage Stats for Cppagent

In my last post we generated code coverage statistics for a sample C++ program. In this post I will be using gcov and lcov to generate similar code coverage for the tests in cppagent. To use gcov we first need to compile the source files with the --coverage flag. Our sample C++ program was a single file, so it was easy to compile, but cppagent uses Makefiles to build the project. Hence, I started with the Makefile, looking for the build instructions.

In my previous posts I discussed the steps for building the agent_test executable, which start by running the make command in the test folder. So I started tracing the build steps from the Makefile in the test folder. Since we run make without any arguments, the default target is executed.

The first few lines of the file are shown below.

# Default target executed when no arguments are given to make.
default_target: all
.PHONY : default_target

These lines specify that the default target for this build is all. Moving down the file, we see the rules for all.

# The main all target
all: cmake_check_build_system
        cd /home/subho/work/github/cppagent_new/cppagent && $(CMAKE_COMMAND) -E cmake_progress_start /home/subho/work/github/cppagent_new/cppagent/CMakeFiles /home/subho/work/github/cppagent_new/cppagent/test/CMakeFiles/progress.marks
        cd /home/subho/work/github/cppagent_new/cppagent && $(MAKE) -f CMakeFiles/Makefile2 test/all
        $(CMAKE_COMMAND) -E cmake_progress_start /home/subho/work/github/cppagent_new/cppagent/CMakeFiles 0
.PHONY : all

So here in the line

cd /home/subho/work/github/cppagent_new/cppagent && $(MAKE) -f CMakeFiles/Makefile2 test/all

We can see Makefile2 is invoked with target test/all.

In Makefile2, towards the end of the file, we can see the build instructions for the test/all target:

# Directory level rules for directory test

# Convenience name for "all" pass in the directory.
test/all: test/CMakeFiles/agent_test.dir/all
.PHONY : test/all

The rule says to run the commands defined under target test/CMakeFiles/agent_test.dir/all. These commands are:

test/CMakeFiles/agent_test.dir/all:
        $(MAKE) -f test/CMakeFiles/agent_test.dir/build.make test/CMakeFiles/agent_test.dir/depend
        $(MAKE) -f test/CMakeFiles/agent_test.dir/build.make test/CMakeFiles/agent_test.dir/build
        $(CMAKE_COMMAND) -E cmake_progress_report /home/subho/work/github/cppagent_new/cppagent/CMakeFiles 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58
        @echo "Built target agent_test"
.PHONY : test/CMakeFiles/agent_test.dir/all

The first two lines run the build.make file with the targets 'test/CMakeFiles/agent_test.dir/depend' and 'test/CMakeFiles/agent_test.dir/build'. build.make contains the compile instructions for each of the C++ files. This file is in the 'test/CMakeFiles/agent_test.dir' folder, along with files such as flags.make and link.txt. The flags.make file contains all the compile flags, and 'link.txt' contains the library flags needed by the linker. By adding the --coverage flag to these files we can make the C++ source files compile with gcov support, so .gcno files are generated when the make command is run.
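As a rough sketch (the exact contents of these generated files vary with the CMake version and project, so the surrounding text is elided here), the change amounts to appending --coverage in both places:

# test/CMakeFiles/agent_test.dir/flags.make -- append --coverage to the compile flags
CXX_FLAGS = <existing flags> --coverage

# test/CMakeFiles/agent_test.dir/link.txt -- append --coverage to the link command
/usr/bin/c++ <existing options> -o agent_test <objects and libraries> --coverage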

After that we need to run agent_test as usual. This will create the .gcda data files. Then we need to gather the .gcda and .gcno files together and run the lcov and genhtml commands, after which the HTML output is obtained.
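A minimal sketch of those manual steps (the object paths are the ones from the custom target in the previous post; adjust them to your build tree):

$ ./agent_test                                        # run the tests, producing .gcda files
$ mkdir Coverage
$ cp CMakeFiles/agent_test.dir/__/agent/*.gcno Coverage/
$ mv CMakeFiles/agent_test.dir/__/agent/*.gcda Coverage/
$ cd Coverage
$ lcov -t "result" -o cppagent_coverage.info -c -d .  # collect the coverage data
$ genhtml -o coverage cppagent_coverage.info          # generate the HTML report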



Using Gcov and Lcov to generate beautiful C++ code coverage statistics

We all know testing is an important part of a project. But how effective are your tests? How much of your code have you actually tested? This is where code coverage tools come in. I recently got to work on a C++ project and a code coverage toolchain (gcov and lcov).

In this post I have taken a sample C++ program and will generate the code coverage stats for it using gcov and lcov. Here is my sample C++ program link. It's a pretty simple menu-driven program that does simple mathematical operations like addition, subtraction, multiplication and division depending on the user's choice.

In this demo I am not writing actual test cases for the code, but you can see the coverage graphs change depending upon your choices.
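For reference, a minimal sketch of such a menu-driven program (not the exact code linked above) could look like this:

#include <iostream>

int main() {
    int choice = 0;
    while (choice != 5) {
        std::cout << "MENU\n1: Add\n2: Subtract\n3: Multiply\n4: Divide\n5: Exit\n";
        std::cout << "Enter your choice :";
        std::cin >> choice;
        if (choice < 1 || choice > 4)
            continue;                      // 5 exits the loop; anything else re-prompts
        double a, b;
        std::cout << "Enter two numbers: ";
        std::cin >> a >> b;
        switch (choice) {
            case 1: std::cout << "Sum " << a + b << "\n"; break;
            case 2: std::cout << "Difference " << a - b << "\n"; break;
            case 3: std::cout << "Product " << a * b << "\n"; break;
            case 4: std::cout << "Quotient " << a / b << "\n"; break;
        }
    }
    return 0;
}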

To start with, we need to install gcov. Gcov comes with the gcc compiler, so if you have gcc installed then gcov will work for you. Next you need lcov. I am working on Fedora 21, so for me it's a yum install.

$yum install lcov

Next lets start with compiling our code. Here my source file name is menu.cpp

$g++ -o menu.out --coverage menu.cpp

The --coverage option here is used to compile and link the code needed for coverage analysis. You will find a menu.gcno file in the folder. Next we need to export two variables, namely GCOV_PREFIX and GCOV_PREFIX_STRIP. Set GCOV_PREFIX to the folder where you want the output files to be placed.

$ls
menu.cpp  menu.out  menu.gcno  data    // you can see the new file menu.gcno

For me the project is in "/home/subho/work/lab/zzz/", and inside it I have created a folder named data where I want the data files (the .gcda files) to be generated. So I set GCOV_PREFIX to "/home/subho/work/lab/zzz/data", and GCOV_PREFIX_STRIP to the number of leading directory levels to strip from the paths hardwired in at compile time, i.e. the number of forward slashes ("/") in the path.

$export GCOV_PREFIX="/home/subho/work/lab/zzz/data"
$export GCOV_PREFIX_STRIP=6

Now let's simply run the code.

$./menu.out

MENU
1: Add
2: Subtract
3: Multiply
4: Divide
5: Exit
Enter your choice :2
Enter two numbers: 3 4
Difference -1
MENU
1: Add
2: Subtract
3: Multiply
4: Divide
5: Exit
Enter your choice :5

Now we can see a menu.gcda file in the data folder. Copy the .gcno file generated earlier into the data folder.


$cd data
$ls
menu.gcda
$cp ../menu.gcno .
$ls
menu.gcda  menu.gcno

Now that we have all the necessary files, let's use lcov to read the coverage data generated by gcov.

$lcov -t "result" -o ex_test.info -c -d .

Here ex_test.info is my output file.

-t    sets a test name
-o    specifies the output file
-c    captures the coverage data
-d    specifies the directory where the data files are searched for

Now we will generate the HTML output for the statistics.

$genhtml -o res ex_test.info 

-o    specifies the output folder name

Now on doing ls, you can see a folder named “res“.

$ls
ex_test.info   menu.gcda   menu.gcno   res

Now it's time to enjoy the fruits of your labor 😛. Go into the res folder and start a server, or you can simply open the index.html file in your web browser.

$cd res
$python -m "SimpleHTTPServer"     //to start a web-server  or
$firefox index.html               //to open the index.html directly using firefox browser

Now we can click on the links to check the code coverage stats. The red lines are the ones not executed (the uncovered regions), and the blue lines are the ones covered. You can also look at the line data section for the number of times each line has been executed.
You can look at these files in GitHub.



Setting up MTConnect C++ Agent

In this post I'll discuss how to set up cppagent (the MTConnect C++ agent) and run tests on it.

To start with, we first need to clone the repository from GitHub from here.

The current cloned version hash is 6d57d38cffff4b368f3ec003c2d8868d4f41a988.

Once you have cloned the repo, enter the root folder of the repository. Now let's first build MTConnect.

For this we will create a folder named build in the root folder.

$ cd cppagent
$ mkdir build
$ cd build
$ cmake ..
$ make

After the make process is complete, we can see the MTConnect C++ agent in action. For this we will need to run the simulator.

We need to copy certain files to run it successfully. From the build/agent folder, copy VMC-3Axis.xml from the simulator folder into the current folder:

$cd agent
$ cp ../../simulator/VMC-3Axis.xml .

Now copy the agent configuration file:

 $ cp ../../agent/agent.cfg .

Next edit the copied agent.cfg file and make the following changes to it:

Devices = VMC-3Axis.xml
Host = 127.0.0.1

Open three terminals. In one of the terminals, start the agent:

$ ./agent

Expected o/p:

MTConnect Agent Version 1.3.0.7 - built on Sun Oct 12 22:20:32 2014

In the second terminal run the adapter simulator. For that you need to go inside the simulator folder in the repository root directory, then type the following command:

$ ruby run_scenario.rb -l -p 7878 --scenario -v simple_scenario_1.txt

Expected o/p:

run_scenario.rb:41: warning: toplevel constant String referenced by OptionParser::String
Waiting on 0.0.0.0 7878
Client connected
Received * PING, responding with pong
2014-10-26T18:14:04.512751|execution|INTERRUPTED
2014-10-26T18:14:06.513296|tool_id|1
2014-10-26T18:14:08.513635|execution|ACTIVE
2014-10-26T18:14:10.514086|execution|READY
2014-10-26T18:14:12.514512|program|Tap|execution|READY
Received * PING, responding with pong
2014-10-26T18:14:14.514799|tool_id|2
2014-10-26T18:14:16.515056|execution|ACTIVE
2014-10-26T18:14:18.515424|execution|READY
2014-10-26T18:14:20.515717|tool_id|3
2014-10-26T18:14:22.516117|program|Countersink|execution|ACTIVE

In the third terminal type the following:

$ curl localhost:5000/current

This will give an XML output every time, and each XML output is different. You can check that by redirecting the output to files and then doing a diff of the two files:

$ curl localhost:5000/current > 1.xml
$ curl localhost:5000/current > 2.xml
$ diff 1.xml 2.xml

The output will be something like:

4c4
< <Header creationTime="2014-10-26T18:19:10Z" sender="localhost.localdomain" instanceId="1414347216" version="1.3.0.7" bufferSize="131072" nextSequence="227" firstSequence="1" lastSequence="226"/>
---
> <Header creationTime="2014-10-26T18:19:16Z" sender="localhost.localdomain" instanceId="1414347216" version="1.3.0.7" bufferSize="131072" nextSequence="229" firstSequence="1" lastSequence="228"/>
70,71c70,71
< <Execution dataItemId="cn6" timestamp="2014-10-26T18:19:10.560074" name="execution" sequence="226">ACTIVE</Execution>
< <ToolId dataItemId="cnt1" timestamp="2014-10-26T18:19:08.559795" name="tool_id" sequence="224">3</ToolId>
---
> <Execution dataItemId="cn6" timestamp="2014-10-26T18:19:14.560666" name="execution" sequence="228">READY</Execution>
> <ToolId dataItemId="cnt1" timestamp="2014-10-26T18:19:12.560338" name="tool_id" sequence="227">2</ToolId>

Great!! Now we have a working version of cppagent.

Next we will build the tests. Follow the steps below to build them. We assume we are outside the repository root, so we need to enter the root first.

$ cd cppagent  
$ cmake .

Now we will enter the test directory in the root folder and build the tests.

$ cd test  
$ make

Now to run the tests, run the agent in one terminal and the following command in another.

$ ./agent_test

Expected Output:



Setting up a web-server for flask-app deployment in mod_wsgi :: Part-2 ::

Before we start, I assume we are ready with our cloud instance and are able to connect to it via ssh, as shown in Part-1 of this post. I also expect that you already have your Flask application ready. Let's start setting up the web server without any more delay. We can deploy our application in many ways, but we will be focusing on mod_wsgi in this post. First we need to install some of the basic packages needed for setting up a web server.

sudo apt-get install apache2 libapache2-mod-wsgi #For Debian/Ubuntu:

sudo yum install mod_wsgi                        #For Rpm based OS

To test whether things are working and the server is up, just find your public IP from the Amazon EC2 console and type it into your browser; this should show the default page. Next we need to get our Flask app onto the instance. For this I used GitHub as the remote repository: you need to install Git and then git clone your repo into the user's home directory.
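A minimal sketch of that step (the repository URL here is hypothetical):

sudo apt-get install git    # or: sudo yum install git
git clone https://github.com/youruser/your-flask-app.git

Now set up virtualenv and install the dependencies (installing dependencies into a virtual environment is good practice):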

sudo apt-get install python-pip 
or                                  #depending on the OS
sudo yum install python-pip 
sudo pip install virtualenv

Now that you have virtualenv installed, we will create our virtual environment:

virtualenv <env-name>
eg: virtualenv env
source env/bin/activate    # to activate the virtualenv

You can install dependencies individually:

pip install <package-name>

Or install them from a requirements.txt file:

pip install -r requirements.txt

Now copy your whole project, along with the virtual environment, to the /var/www/ folder.

sudo cp /current/path/app-root /var/www/ -r

::Adding your .wsgi file::

Now that you are ready, let's add a new file named 'yourappname.wsgi' to our app root, with the following content:

from yourapplication import app as application  # structure this line so that you can
                                                # import app from your flask app as
                                                # application
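For context, a minimal yourapplication module (a hypothetical sketch matching the import above) could look like this:

# yourapplication.py -- minimal Flask app sketch (hypothetical)
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello from mod_wsgi!'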

To run our app we need to specify the environment path.

Method 1:
Add the following two lines to the top of your .wsgi file:
activate_this = '/var/www/project-root/your_virtualenv/bin/activate_this.py'
execfile(activate_this, dict(__file__=activate_this))
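Note that execfile exists only in Python 2; on Python 3 the equivalent (assuming the same activate_this.py path) would be:

activate_this = '/var/www/project-root/your_virtualenv/bin/activate_this.py'
with open(activate_this) as f:
    exec(f.read(), dict(__file__=activate_this))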

Method 2: This method is shown in the next section.

::Writing config file for apache::

It took me a while to figure out how and what to write in this file. You need to create a configuration file for Apache:

sudo vi /etc/apache2/sites-enabled/000-default.conf    // for Ubuntu
sudo vi /etc/httpd/conf.d/wsgi.conf                    // for RHEL or Amazon Linux

Now add the following lines to the file (Method 2 for adding the virtual environment path is shown below, via the python-path option).

WSGISocketPrefix /var/run/wsgi    # avoids a "permission denied" error on RHEL or
                                  # Amazon Linux; not necessary on Ubuntu

<VirtualHost *:80>
    ServerName yourservername.com
    # Running the daemon process as a different user and group is good for security.
    WSGIDaemonProcess yourappname user=user1 group=group1 threads=5 python-path=/var/www/yourappname:/var/www/test_app/env/lib/python2.6/site-packages
    WSGIScriptAlias / /var/www/yourappname/yourappname.wsgi

    <Directory /var/www/yourappname>
        WSGIProcessGroup yourappname
        WSGIApplicationGroup %{GLOBAL}
        Order deny,allow
        Allow from all
    </Directory>
</VirtualHost>

You can check the link if you find things confusing. Under mod_wsgi, Jinja2 won't show you errors in the browser even if you set app.debug = True; instead, you can make your app show debug output and print statements in the terminal. For this you need to add

WSGIRestrictStdout Off

in the config files. You can open these config files using the vi editor:

sudo vi /etc/apache2/apache2.conf    // for Ubuntu
sudo vi /etc/httpd/conf/httpd.conf   // for RHEL or Amazon Linux

Now Ubuntu users need to enable mod_wsgi and restart the server:

sudo a2enmod wsgi 
sudo service apache2 restart

RHEL users just need to restart their server. To do so, type:

sudo service httpd restart    // you may get a [FAIL] for stopping the server; that's OK because you are starting Apache for the first time

If things work fine, well and good; if not, you can have a look at the error logs:

sudo tail -f /var/log/apache2/* //for Ubuntu
sudo cat /etc/httpd/logs/error_log //for RHEL/Amazon Linux

Now, if you look closely at the config file, we wrote user=user1 and group=group1. These need to be created. First, you can check the current owner and permissions using:

ls -ld /var/www/site1/
drwxr-xr-x 2 root root 4096 Oct 10 11:21 site1/

Create the group first and then add the user to it.

sudo groupadd group1

Now add the user to the new group:

sudo useradd user1 -g group1

We also need to add the Apache user to this group, but first we need to find it:

finding the apache user,
ps aux | grep apache  //for Ubuntu it's generally www-data
ps aux | grep httpd   // for RHEL/Amazon Linux it's generally apache 
then,
sudo usermod -a -G group1 <user>  //user as found from the above commands

You can now verify if the user you created is in the new group

groups user1
user1 :  group1 //output

Now we will change the group ownership of the flask-app folder to group1 so that the group has full access to that folder. This will let the app create new files or upload files on the server.

sudo chown -vR :group1 /var/www/site1/
changed ownership of `/var/www/site1/' from root:root to :group1
chown -> change ownership
-v    -> verbose; shows the file names affected by the command
-R    -> applies recursively to all children
:group1 -> the name of the new group
sudo chmod -vR g+w /var/www/site1
mode of `/var/www/site1/' changed from 0755 (rwxr-xr-x) to 0775 (rwxrwxr-x)
chmod -> change mode bits
-v    -> verbose output
-R    -> apply recursively
g+w   -> give write access to the group

Now you can verify if group1 has been added

ls -ld /var/www/site1/
drwxrwxr-x 2 root group1 4096 Oct 10 11:21 /var/www/site1/

Restart your server and check. If you liked this blog, give it a like and share it. Happy coding!


Setting up a web-server for flask-app deployment in mod_wsgi :: Part-1 ::

Here I discuss how to set up your cloud instance and connect to it. If you are already able to connect to your instance, you can go directly to Part-2 of this article. This week I had a tough time setting up the web server for my Flask app. First you need to get your server instance ready. I opted for an AWS EC2 instance for my app; you can look at the suggestions below for cloud server providers.

@ Amazon EC2

@ Crowncloud

@ DigitalOcean

How to launch an EC2 Instance?

If you selected AWS as your cloud server provider, you can continue reading; otherwise, if your instance is ready, you can jump directly to "How to connect to the instance?". For AWS you need to create an account and log in, then from the AWS console go to the EC2 dashboard and click "Launch Instance". Select your instance configuration and launch it.

Step 1:: Select your Instance Image, which in simple words is the OS you want to use. I used Ubuntu Server 14.04 LTS (HVM), SSD Volume Type, 64-bit.

Step 2:: Next choose the Instance Type depending on your needs.

Step 3:: For now we are going to opt for the default settings.

Step 4:: Attach a volume to your Instance. Specify the required size and type of Volume you need.

Step 5:: You can Tag your instance for better identification of the instance.

Step 6:: You can choose a security group if one already exists, or create a new group. To create a new group, select the "create new" option, give your security group a proper name, then add rules to it. For now we will add two rules:

Rule 1: set type as "ssh" and source as "My IP" or "Custom IP" (recommended); selecting "Anywhere" may be a security loophole.
Rule 2: set type as HTTP and source as Anywhere.

Step 7:: Review and launch the instance. You will be asked to create a key pair or choose an existing one. To create a new one, enter the key-pair name and download it. Note: remember the downloaded key file's location, as we will be needing it later. Congrats, your instance has been created!!!

How to connect to the instance?

If you are using Windows on your local machine, you can follow this link to connect to your instance using PuTTY. You can get PuTTY and PuTTYgen from here. Linux users have ssh installed by default, so I will be using ssh from my local Fedora machine to connect to my Ubuntu instance. Before you connect, change the file permissions of the key file:

$chmod 400 /path/to/my_key_file.pem

Now to connect to the instance:

$ssh -i path/to/your/key-file.pem <user>@<public dns>

eg:
$ssh -i ~/.ssh/awskey.pem ec2-user@ec2-54-183-159-198.us-west-1.compute.amazonaws.com

Now, the key-file.pem here is the file that was downloaded to your machine while creating a new key pair. The user here is ec2-user or root for Red Hat Linux, ec2-user for Amazon Linux, and ubuntu for Ubuntu. You can get your Public DNS from your instance page as shown below.

[Image: Public DNS]

If it's the first time you are logging into the instance, you will be asked whether to store the ECDSA fingerprint. Type in "yes" and it will be added to your list of known hosts. Next time you will be spared these questions.

[Image: SSH]

In the next part we shall discuss how to deploy the application on the server.

Note: People with a dynamic IP may have problems connecting to the instance when the IP changes (like mine did). The solution is: open the EC2 dashboard -> Security Groups -> select the security group attached to your instance -> click the "Inbound" tab below -> Edit -> change the source for ssh to "My IP" -> Save. Now try logging in from your terminal.


10 years of DGPLUG

We recently celebrated 10 glorious years of DGPLUG. On this occasion we had a 5-day workshop (29th August to 2nd September) at NIT Durgapur.

::Day 1::

The first day was a more or less introductory session where we discussed our goals, our history, and our programme for the future. The attendees were mainly 1st, 2nd and 3rd year engineering students from various colleges. The speaker for the session was Kushal Das, who told them about the Summer Training conducted by DGPLUG for free every year. We told them about Open Source and how they can contribute to open source projects. The next talk, by Praveen Kumar, was all about the Fedora Project and how we can contribute to it. Next we had a really interesting talk by P.J. Prasad on iptables. We concluded the first day of the workshop with light discussions on various topics and asked the students to get some packages installed for the next day.

::Day 2::

On day two of the event the auditorium was teeming with eager faces; a workshop on Python was to be held. Kushal Das took the session on Python along with an introduction to Vim. We had a tough time helping the attendees get used to this new editor; nevertheless we enjoyed doing it. The session followed the book Python for You and Me. We covered basic Python commands, data structures and other basic features of the language. By the end of the day we managed to write a program which could list the files and folders of a directory, quite similar to the ls command of Linux.

::Day 3::

On day three, in the first half of the day, we had a workshop on Flask, a Python-based web framework, by Sayan Chowdhury. In the second half we had a workshop on Python's unittest module by Ratnadeep Debnath, where we learned to write test cases for our functions.

::Day 4::

On the fourth day of the event we had a session on documenting our code by Kushal Das. We introduced reStructuredText for writing documentation and then used rst2s5 to convert it into presentations. After that we used the powerful Python package Sphinx and prepared a demo documentation.

::Day 5::

Day five was all about contributing to upstream projects: learning to use Git, making patches, and other important Git features and commands. After that we had a general discussion session followed by a feedback session. We got a lot of positive feedback and suggestions for improvement, which have been noted down and will be kept in mind for upcoming events.

Finally we talked about various projects at hand and newer project ideas to be developed. Hoping to meet these awesome people soon at PyCon India. Till then, keep coding! 😉
