Category: Web Development


A Beginner’s Guide to Angular CLI

What is Angular CLI?

For the uninitiated, Angular CLI is a command line interface for building Angular apps quickly while adhering to best practices. Angular CLI has multiple advantages, some of which are listed below for quick reference.
In this blog post, we'll cover why we need Angular CLI, how to install it, and how to use it, i.e. the various commands it offers to make our lives easier. We'll also look at how we can generate production-ready builds using Angular CLI.

Why do we need Angular CLI?

  1. It creates and provides a scalable project structure.
  2. It initializes a Git repository and makes an initial commit.
  3. It handles all the common tedious tasks out of the box.
  4. It generates the initial HTML, TypeScript, CSS, and unit test files.
  5. It creates a package.json with all the Angular dependencies and installs all the node modules (npm install).
  6. It configures Karma to execute the unit tests with Jasmine.
  7. It configures Protractor to execute the end-to-end tests.
  8. It creates all the files necessary to build the application for production.

Let’s start with the basics

How to install Angular CLI ?

Please note that Angular CLI is a Node package and requires Node version 4 or above.

To check the Node version on your machine, use the command – node -v

If you do not have Node installed, or your Node version is older than 4, you should consider updating it. At the time of this blog post, the latest stable version of Node is v7.9.0.

Once you have Node installed, you also have npm (the Node package manager) installed. To check the npm version, all you need to do is run – npm -v

Now, we can install the Angular CLI using –

npm install -g angular-cli

Now we are ready to use Angular CLI

Using Angular CLI –

  1. ng new <<name_of_the_project>>

Creates an Angular project with all the files, folders, and packages needed, and runs npm install.

Ex 1 – ng new hello-world (a project is created in the directory you are currently in)

Ex 2 – ng new path/hello-world (Using a relative path)

Note –

Project name “01-test” is not valid. New project names must start with a letter, and must contain only alphanumeric characters or dashes. When adding a dash the segment after the dash must also start with a letter.

Project Structure – This is how an Angular 4 project structure looks:
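A freshly generated Angular CLI (1.x) project typically looks something like this:

hello-world/
├── e2e/                  # end-to-end tests (Protractor)
├── node_modules/
├── src/
│   ├── app/              # components, modules, services
│   ├── assets/
│   ├── environments/
│   ├── index.html
│   ├── main.ts
│   ├── styles.css
│   └── test.ts
├── .angular-cli.json     # CLI configuration
├── karma.conf.js         # unit test runner configuration
├── protractor.conf.js    # e2e test configuration
├── package.json
├── tsconfig.json
└── tslint.json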

  2. ng init

Creates an Angular project in the current directory:

mkdir name_of_the_project   # creates a directory
cd name_of_the_project
ng init
  3. ng serve (also ng server and ng s)

Serves an Angular project via a development server. It's very similar to "npm start".

If ng serve executes without any errors, you should be able to see the result at http://localhost:4200/.

Note – The default port used is 4200 and the default live-reload port is 49152.

To use other ports –

ng serve --port 4201 --live-reload-port 49153
  4. ng generate (also ng g)

Creates Components, Services, Pipes, and other classes.

Scaffold – Usage
Component – ng g component my-new-component
Directive – ng g directive my-new-directive
Pipe – ng g pipe my-new-pipe
Service – ng g service my-new-service
Class – ng g class my-new-class
Guard – ng g guard my-new-guard
Interface – ng g interface my-new-interface
Enum – ng g enum my-new-enum
Module – ng g module my-module

 

  5. Using CSS pre-processors with Angular CLI

Use the style flag when creating the Angular project –

ng new my-project --style=sass
ng new my-project --style=less
ng new my-project --style=stylus

Using ng set – to add or change the pre-processor in an existing project –

ng set defaults.styleExt scss
ng set defaults.styleExt less
ng set defaults.styleExt styl
  6. ng lint

Lints the project's TypeScript code and checks for issues like unused imports, unexpected spaces, extra commas, etc.

If we declare the variable in the below manner and run ng lint –

myVariable : String; ❌

ng lint reports "Expected no space before colon in property-declaration." The corrected declaration is –

myVariable: String;

  7. ng test

Executes the Jasmine unit tests in the .spec.ts files and reports the total number of tests run and how many failed.

  8. ng e2e

Runs the end-to-end tests, written using the Protractor framework, from the e2e folder.

  9. ng build

Generates a build of the application (development or production).

The build is generated inside the dist folder, which is ignored by the version control.

Creating a Development build

ng build --target=development --environment=dev

OR

ng build --dev --e=dev

OR

ng build --dev

OR

ng build

Creating a Production build

ng build --target=production --environment=prod

OR

ng build --prod --env=prod

OR

ng build --prod

Note that the development build is neither obfuscated nor minified. It also doesn't add cache-busting hashes to the generated files, so browser caching might lead to unexpected behaviour of the application.

  10. ng version

Returns the versions of angular-cli, Node, and the operating system.

  11. ng help

Lists all the available Angular CLI commands.

All You Wanted to Know About Yarn Package Manager

Yarn Package Manager

Package management for the web has come a long way; there are many package managers, like Bower, npm, etc. Of late, the Yarn package manager has gained popularity, going by Google search trends.

What is Yarn and Why is it So Popular?

Yarn is a new JavaScript package manager built by Facebook and Google together to solve some of the shortcomings of the popular npm tool.

That doesn't mean Yarn is going to completely replace npm; rather, Yarn is a new CLI that fetches modules from the npm registry. Nothing about the registry changes – you will still be able to fetch and publish packages as usual.

Before looking into the reasons why Yarn is getting so much attention, let's look at the shortcomings of npm.

Problems with NPM:

I have listed below the drawbacks faced by the Facebook and Google teams. Chances are you've never encountered these problems with npm yourself. We are going to compare npm and Yarn so you can decide which one is best for you.

  • Queued Installation: While fetching dependencies from the registry, npm installs them one after another, which takes a lot of time.
  • Single Registry: The npm registry has a lot of packages, but if the package you're looking for is not on npmjs.com, you have to fall back on a CDN or other options.
  • No Offline Installation: npm doesn't support offline installation because it doesn't cache packages during installation. This means that if you have installed a package before and try to install it again while offline, npm gives an error.

Yarn vs Npm – The Differences:

Yarn looks similar to npm at first glance, but when you look closer and try using it, you will understand what makes Yarn a great tool.

  1. Parallel Installation: While fetching packages from the registry, both npm and Yarn carry out a number of tasks. In npm these tasks are done one after another; Yarn executes them in parallel, which reduces installation time.
    Let's compare the time Yarn and npm take to install the same packages. Before comparing, you need to install Yarn globally using the command below.
 npm install -g yarn


Yes, I know it feels like using Internet Explorer to download Google Chrome 🙂
I will be installing express and gulp to compare the time difference between npm and Yarn; you may try installing any packages you like.

The Result:

  • Yarn: 32 seconds
  • Npm: 52 seconds

I know the difference for 2 packages is not much (20 seconds), but imagine a project with a large number of packages and the amount of time Yarn can save you. The bottom line: Yarn is faster.

  2. Yarn lock file: Imagine a scenario where you have created an app and it's working as expected, but after a while the app breaks due to a package version mismatch. To avoid this, Yarn creates a lock file recording the version of every package when it is installed, and updates it whenever packages change.

In npm, the npm shrinkwrap command generates a lock file, and npm reads it before installation. The major difference here is that Yarn creates and updates the lock file automatically, whereas npm updates the lock file only if it already exists.

  3. Cleaner Output: If you run the npm install command, npm lists every installed package; Yarn, on the other hand, displays a concise set of steps – resolving packages, fetching packages, linking dependencies, building fresh dependencies – with emojis (except for Windows users 🙁).

Yarn CLI Commands:

There are two ways to install Yarn if you haven't already done so. The first is using the npm command below, and the second is from the official download page.

 npm install -g yarn

Note: Both methods need Node.js already installed on your machine.

As mentioned earlier, Yarn does not intend to replace npm. It installs dependencies from the same registry and saves them to the node_modules folder.

Initializing package.json using Yarn:

If you need a new package.json file to initialize the dependency tree, you can use the command given below.

 yarn init


If you have used npm earlier, this command will look familiar. It asks the same kind of questions npm asks while initializing a package file – name, version, description, etc.

Yarn Install:

In npm, the npm install command installs all the dependencies listed in the package.json file and also allows you to add other packages. With Yarn there is a slight difference: the yarn install command only installs packages from the yarn.lock or package.json file. Adding or upgrading packages uses separate commands in Yarn.

Adding, Upgrading, and Removing Dependencies:

Adding: Similar to npm install [package name], the command yarn add [package name] allows you to add and install dependencies. As the name of the command implies, it adds dependencies, meaning it adds a reference to the package in the yarn.lock and package.json files.

The yarn add command adds packages to the dependencies section; if you need to add a package to the devDependencies section, append the --dev flag to the command, as shown below.
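For example, with the gulp package used earlier in this post:

yarn add gulp        # added under "dependencies"
yarn add gulp --dev  # added under "devDependencies"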

Upgrading: This command yarn upgrade is similar to npm update and will update all the packages to their latest version. If you specify the package name and version number it will upgrade the package to that specific version.

For instance, yarn upgrade gulp@3.9.1 will update gulp to version 3.9.1.

Removing: yarn remove [package name] removes the package from the node_modules folder and updates package.json and the yarn.lock file accordingly.

For example, yarn remove gulp will uninstall gulp and update the same.

Stability and Future:

There were some issues at the time of Yarn's launch, but the rate at which issues are being resolved is astounding. It indicates that the community is working hard to find and fix bugs. Yarn appears stable for most users, including yours truly. I have been using Yarn for more than two and a half months and haven't faced any issues.

The future looks bright, as it is backed by Google and Facebook. Yarn will continue to be actively developed, which may make it the default package manager, or it may get forked and adopted as the official npm package manager.

Nonetheless, the project looks very promising and I would definitely recommend trying Yarn on individual projects.

Conclusion:

In the final analysis, Yarn scores considerably higher than npm. We get fast package installation, a lock file, and much more. You can try it on a pilot project and see if it works for you. At the least give it a shot – it's worth a try.

And for those of you already using Yarn in your projects, please share your experience with it so far. Let me know your take in the comments below.

Idyllic Featured as one of the Top Ruby on Rails Developers

Top Web Developers

Clutch, an independent ratings and reviews firm based in Washington, D.C., has featured Idyllic as a leader in Ruby on Rails web developers, giving Idyllic the #2 rank in Rails technology.

Clutch evaluates hundreds of web developers across the globe. Their methodology is based on a number of qualitative and quantitative criteria, including previous work, market presence, client accolades, among others.

Most importantly, the backbone of Clutch’s scoring methodology comes from verified client interviews of our web development services, in which Clutch speaks directly to our clients to get a sense of the services provided.

Here are some client quotes in the interviews conducted by Clutch:

One client, the founder of a travel abroad SaaS company, stated:

“They delivered good results and didn’t simply code based on their individual egos, giving us things which we didn’t need. Idyllic worked based on our business goals, listened to our instructions, and took initiative appropriately.”

When asked about his overall satisfaction with our web development services, he also added:

“After spending time with Idyllic’s team and their founder, I’ve come to value their integrity and leadership. Whenever we had any concerns or questions, Idyllic’s principal responded to us. I could tell that his team believed in his leadership and followed the integrity he set in terms of how to work with clients, how to follow their needs, what the code quality standard is, and so on. Idyllic’s principal does an excellent job in exuding his passion for technology. I’ve come to look for the same elements in other relationships.”

Another client said:

“The marketing team was thrilled at how rapidly improvements and changes were being made to the website. Our internal engineers also liked the code quality delivered by Idyllic. I considered them a lifesaver, especially since I was responsible for other projects while keeping internal customers happy.”

As 2016 came to a close, Clutch also recognized Idyllic as one of the Top Web Developers in India in their research.

We're excited to be included in Clutch's research, which highlights our focus on providing high-quality work with great customer focus, and to be ranked as a top web developer. A shout-out to all our enterprising clients who expressed their faith in us, and to Clutch for their in-depth interviews and diligence. For more information, do check out Idyllic's reviews on Clutch.

6 Best Practices for Leading a Development Team

When you are the team lead of a small or a big development team, you have many responsibilities on the project that you don't have as a developer. You become a key player in ensuring successful delivery of the project. Here are 6 things to keep in mind as a lead.

Architecture

This is one of the key aspects of your responsibility as a team lead: visualizing the application architecture. It will help you every time you want to extend the application or accommodate any changes. It will also help you guide each developer on the team in the part of the application he or she is working on.

On top of everything, when development is in full swing, you will be able to refer to your architecture diagram to ensure you are adhering to the software engineering principles you had in mind while creating the architecture.

Code Quality

This is another vital responsibility that one needs to execute. You will be making sure the overall quality of the codebase is maintained. This can be achieved by doing a few things:

  1. Code Reviews
  2. Defining Coding Standards
  3. Knowledge sharing in the team

Making Progress

As a developer, you are usually tempted to revisit your code and come up with a better design, or maybe you want to refactor the code you have written. At times the development team may go overboard with this refactoring and over-engineer a solution. As a lead, you have to keep track of the deliverables and the cost the business is incurring. At times a good-enough solution is all that is feasible, and you need to draw the line for the development team keeping the business's interests in mind.

Planning and Process

Being a lead, you will be one of the custodians of the process. You will need to ensure that the team is following the process strictly. This will help you stay updated on the current state of the project. At the same time, you can establish ownership and accountability for the project. With a solid execution plan, you can manage the expectations of the business well. The lead should also spend some time periodically revisiting the roadmap.

Communication

As a lead you are involved in two-sided communication. You are presenting the business side of the story to the development team; at the same time, you are presenting the development team's perspective and challenges to the business. If you can communicate effectively on both fronts, you will be able to ensure smooth execution of the project.

Writing Code

As a lead, you are always in a dilemma: should you be writing code, or should you be mentoring the team and getting involved in planning? The simple answer is YES, you should be writing code (a certain Zuckerberg still does). The advantage is that you are always up to date with the codebase, and you avoid making uninformed decisions. Also, as a good leader, you should work with the team in the trenches. At heart you are still a developer; you will enjoy this part the most and it will refresh your mind.

 

Build your own mini Heroku powered by Docker in 3 simple steps

Heroku, as you are most probably aware, is a popular PaaS (Platform as a Service) that allows developers to easily deploy applications developed in different languages or frameworks to the cloud using a simple 'git push'. This saves you from having to provision a server yourself, which involves installing all the required dependencies, checking config files, etc. There are tools to automate server provisioning and deployment, for example Vagrant, Chef, Puppet, Ansible, F*cking Shell Script, etc., but they have their own learning curve. A PaaS like Heroku takes care of all the deployment for you, literally taking care of the ops in devops, so that you can focus on developing great apps rather than also building the required plumbing. However, this power comes at a cost. They do have cheaper plans for hobbyists, but deploying any serious application using Heroku today costs a minimum of $50.

 

However, if you don't need all the extras that Heroku offers and just need the simple git push based deployment and buildpack support, then there is an open source alternative called Dokku that anyone can run on their own Linux server running the 64-bit version of Ubuntu 14.04, and have their own PaaS up and running in minutes. For example, if you are using DigitalOcean, you can have your own PaaS up and running for as low as $5 a month. For the curious, buildpacks are what Heroku uses to support the automatic deployment of applications developed using different languages or frameworks like Rails, Node.js, Java, PHP, etc., and there is a separate buildpack for each platform. Since all the official buildpacks have been open sourced by Heroku, Dokku is able to use all of them, and you can view the list of those here.

 

Dokku describes itself as a 'Docker powered mini-Heroku' and is written in around 200 lines of Bash script! It uses Docker to emulate the Heroku-like experience. And though you don't really need to know a lot about Docker and containers to get started with Dokku, being familiar with them will help you understand how Dokku works and use it more effectively for your own purposes. So before I demonstrate Dokku, I will provide a brief introduction.

 

Docker is the software that allows you to create and manage containers. Containers are analogous to virtual machines in the sense that they allow you to isolate multiple processes or applications. But unlike virtual machines, which each have their own copy of the entire operating system, containers share the operating system kernel of the host they are running on. This allows multiple containers to run at the same time on the host, which makes them extremely fast and resource efficient compared to virtual machines. This is made possible by Linux kernel features such as namespaces and cgroups (the same features LXC builds on) and layered file systems like AuFS. Also, since each application is packaged in a container with the entire runtime and dependencies it needs to run, it can be deployed as-is on any system with Docker installed. It doesn't matter which system you used to develop your app and which one you are deploying it on, making your apps truly portable, as in 'write once, run anywhere!'

 

Container-based virtualization support has been available in the Linux kernel for a long time, but it was not simple to use. Docker has made containers easily accessible to everyone. However, Docker alone does not give you easy Heroku-like deployment. This is where platforms like Dokku come in: they stand on the shoulders of giants, using Docker under the hood along with various other open source projects like gitreceive, buildstep, and Nginx to provide the PaaS experience.

 

To demonstrate Dokku I will deploy a simple Node.js application to a Linux VPS running Ubuntu 14.04 x64 in 3 steps.

 

Step 1. On your local machine, run the following to fetch a sample Node.js app.
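Any small Node.js sample app will do; for example, Heroku's sample app (the repository used here is just an illustration):

git clone https://github.com/heroku/node-js-sample.git
cd node-js-sample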
Step 2. (One-time step) On your server, install Dokku using the following two commands as a user with sudo access. If you are using DigitalOcean, then you can skip these two commands entirely and use their one-click install to create a server with Dokku pre-installed.
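At the time of writing, the Dokku install looks roughly like the following; the version tag changes between releases, so check the Dokku site for the current one:

wget https://raw.githubusercontent.com/dokku/dokku/v0.9.4/bootstrap.sh
sudo DOKKU_TAG=v0.9.4 bash bootstrap.sh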
Then just navigate to your server's IP in a browser, add your domain (if you have one), and add your public SSH key to finish the configuration.

 

Step 3. And finally, run the two commands below in your application directory to deploy the app! You can change my-node-app to whatever other name you want.
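With Dokku the deploy is a plain git push, so the two commands look something like this (substitute your own server address or domain):

git remote add dokku dokku@your-domain-or-server-ip-here:my-node-app
git push dokku master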

 

And voila, within a few minutes your app will be up and running at 'http://your-domain-or-server-ip-here'. It's really that simple!

 

To get a peek behind the scenes and confirm that the application is really being run inside a container, run the command 'docker ps' on the server and see the container running the Node.js application.

 

For more information on Dokku, check out http://dokku.viewdocs.io/dokku/. Also, there is another project called dokku-alt (short for dokku-alternative), which is a fork of Dokku and comes with a lot of commonly used plugins, like data stores, preinstalled. I hope this post has whetted your appetite for Dokku in particular, and Docker and container technology in general, and that you will explore them in more detail. If you need further help or clarification on anything above, you can reach out to me at raza@idyllic-software.com. Happy hacking!

Working with Logstash

Logstash is a centralized tool to collect and aggregate logs. It is so intuitive, and its configuration is so easy to understand, that you will just love it.

This post describes how to work with Logstash and its configuration.

In a nutshell, Logstash is composed of three main components:

1. Input
2. Filter
3. Output

Input: the medium/source through which Logstash receives your log events.
Valid input sources include stdin, tcp, udp, zeromq, etc. In fact, Logstash has a wide range of input plugins which you can choose from (to get the full list of input plugins, click here).

The input block essentially looks like this.

input {
   stdin {
      codec => 'plain'
    }
}

 

Output: the source or medium to which Logstash sends or stores its events.
Just like input, Logstash provides a wide range of output plugins as well.

The vanilla output block looks like this

output {
   stdout {
      codec => 'rubydebug'
   }
}

If you really aren't planning to perform any filtering on the data or log messages you receive, most of the time the above blocks (input and output) are sufficient to get started with Logstash.

Note: We are making a minor adjustment in our working example. Instead of using stdin, we will be using tcp as the input plugin.

A final look at our configuration.

## logstash.conf
 input {
   tcp {
      port => '5300'
   }
}

output {
   stdout {
      codec => 'rubydebug'
   }
}

 

Testing Configuration –

logstash -f logstash.conf --configtest

 

Running Logstash

logstash -f logstash.conf

 

The field descriptions below should help you understand what a Logstash output event looks like.

Note: I used Telnet to send logs to Logstash.

@timestamp: An ISO 8601 timestamp.
message: The event's message.
@version: The version of the event format. The current version is 1.
host: The host from which the message/event was sent.
port: The port of the client.

 

Filter: filter plugins are used to massage (filter) the logs, if needed, so that one can modify the received log message before outputting it via an output plugin.

A simple filter block looks like this (we will explore it in our next example):

filter {
   grok {
     ## grok filter plugin 
   }
}

 

To explain the power of Logstash, let us just work with a demo example.

Here we have an application which generates logs of various types:

– Custom debugging logs
– SQL logs, etc.

Example.

[20-JUN-2016 14:00:23 UTC] Received Message

[20-JUN-2016 14:00:24 UTC] Before query the IP Address
(1.0ms)  SELECT "ip_addresses"."address" FROM "ip_addresses" WHERE "ip_addresses"."resporg_accnt_id" = 3
[20-JUN-2016 14:00:24 UTC] After query the IP Address
[20-JUN-2016 14:00:24 UTC] The Ip address found is X.X.X.X

[20-JUN-2016 14:00:27 UTC] Quering ResporgID
ResporgAccountId Load (2.0ms)  SELECT resporg_account_ids.*, tfxc_fees.fee as fee FROM "resporg_account_ids" AS resporg_account_ids LEFT JOIN ip_addresses ON resporg_account_ids.id = ip_addresses.resporg_accnt_id LEFT JOIN tfxc_fees ON resporg_account_ids.id = tfxc_fees.resporg_account_id_id WHERE "resporg_account_ids"."active" = 't' AND (((ip_addresses.address = 'x.x.x.x' AND ip_addresses.reserve = 't') AND ('x.x.x.x' = ANY (origin_sip_trunk_ip))) OR (resporg_account_ids.resporg_account_id = 'XXXX') OR (resporg_account_ids.resporg_account_id = 'XXXX'))
[20-JUN-2016 14:00:27] Resporg ID is TIN

[20-JUN-2016 14:00:29 UTC] Querying Freeswitchinstance 
FreeswitchInstance Load (1.0ms)  SELECT  "freeswitch_instances".* FROM "freeswitch_instances" WHERE "freeswitch_instances"."state" = 'active'  ORDER BY "freeswitch_instances"."calls_count" ASC, "freeswitch_instances"."average_system_load" ASC LIMIT 1
[20-JUN-2016 14:00:29 UTC] FreeswitchInstance is IronMan.

[20-JUN-2016 14:00:29 UTC] Get the individual rate
IndividualCeilingRate Load (0.0ms)  SELECT  "individual_ceiling_rates".* FROM "individual_ceiling_rates" WHERE "individual_ceiling_rates"."resporg_account_id_id" = 7 AND "individual_ceiling_rates"."originating_resporg_id" = 3 LIMIT 1
[20-JUN-2016 14:00:29 UTC] The individual rate is 20

[20-JUN-2016 14:00:30 UTC] Query the individual rate
Rate Load (1.0ms)  SELECT  "rates".* FROM "rates" WHERE "rates"."resporg_account_id_id" = 3 LIMIT 1
[20-JUN-2016 14:00:30 UTC] The Selected rate is 40

 

Now, we need our system to output (or store) the logs based on their type (SQL or custom).

This is where the power of the filter plugin shines.

GROK filter plugin

A closer look at the grok filter plugin shows that one can add a regex for the incoming log events (for filtering).
Note: Grok has a wide range of predefined regex patterns (120+) that you can choose from, but its power is not limited to those predefined patterns. In fact, one can provide a custom regex pattern as well (as in our case).
In our case, we can apply a regex to either the SQL or the custom logs (we are choosing the SQL messages) and then segregate them.
Note: If you need help building patterns to match your logs, you will find the grokdebug and grokconstructor applications quite useful.

The Regex –
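A custom grok pattern along the following lines, saved in the ./pattern directory referenced in the configuration below, would match the SQL lines in the sample log above. Treat it as an illustrative sketch rather than the exact regex:

ARSQL \(%{NUMBER}ms\)\s+(SELECT|INSERT|UPDATE|DELETE).*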

Let’s define our configuration now.

## input the log event via TCP.
input {
   tcp {
      port => '5300'
   }
}

filter {
  ## apply this filter only to log event of type custom
  if ([type] == "custom") {
    grok {
       ## load your custom regex pattern 
       patterns_dir => "./pattern"
       ## compare the message against the regex you supplied
       match => { "message" => "%{ARSQL:sql}" }
       ## if the message matches the given regex, add a field called "grok" with the value "match"
       add_field => {"grok" => "match"} 
    }

  ## if the grok field equals "match", the regex above matched
   if ([grok] == 'match') {
      ## apply mutate filter plugin to replace the type from CUSTOM to SQL
      mutate {
        replace => {"type" => "sql"}
        ##  remove the grok field that was added in the earlier filter
        remove_field => ["grok"]
       }
    }
  }
}

## output plugin. For now we will use rubydebug, but we could just as easily use any other output plugin
output {
   stdout {
      codec => 'rubydebug'
   }
}

 

Let's examine the output:

{
       "message" => "Received Message",
      "@version" => "1",
    "@timestamp" => "2016-06-20T14:00:23.320Z",
          "host" => "werain",
          "type" => "custom" ## custom tag
}

{
       "message" => "(1.0ms)  SELECT "ip_addresses"."address" FROM "ip_addresses" WHERE "ip_addresses"."resporg_accnt_id" = 3
",
      "@version" => "1",
    "@timestamp" => "2016-06-20T14:00:24.520Z",
          "host" => "werain",
          "type" => "sql" ## we have successfully managed to change the type to sql(from custom) based 
                          ## on the grok regex filteration
}

Notice the type sql being substituted (via mutate) in place of the custom type.
Note: If that is not enough, you can also have Logstash filter events through an external program. If you want, simply try my demo example and the Logstash configuration defined over here and here.

That's all, folks. I hope I managed to do justice to the amazing tool called Logstash, which has made my log-management tasks so much easier.

Thanks.

 

Use of Amazon SES in Rails App to Send emails

What is Amazon SES?

Amazon SES is an email platform that provides an easy, cost-effective way to send and receive email using your own email addresses and domains.

Sending Email with Amazon SES using SMTP

You can use the Simple Mail Transfer Protocol (SMTP) to send emails through Amazon SES. You need an Amazon SES SMTP username and password to access the Amazon SES SMTP interface.

To create SMTP credentials:

1. Sign into the AWS Management Console and open the Amazon SES console at https://console.aws.amazon.com/ses.

2. In the navigation pane, click SMTP Settings.

3. In the content pane, click Create My SMTP Credentials.

[Screenshot: SMTP Settings]

4. In the Create User for SMTP dialog box, you will see that an SMTP username has been filled in for you. You can accept this suggested username or enter a different one. To proceed, click Create

[Screenshot: Create User for SMTP]

5. Click on Show User SMTP Credentials. Your SMTP credentials will be displayed on the screen; copy them and store them in a safe place. You can also click Download Credentials to download a file that contains your credentials.

After creating your SMTP credentials, open the Verified Senders/Email Addresses screen. Before you can send an email using Amazon SES, you have to verify the address or addresses that are going to be used as senders through the SES SMTP mail servers.

[Screenshot: Verify a New Email Address]

Configure the mailer with SMTP in your Rails app:

Add the following code in config/environments/*.rb:
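A typical ActionMailer SMTP configuration for SES looks roughly like this; the SMTP endpoint region and the environment variable names below are placeholders, so substitute your own:

config.action_mailer.delivery_method = :smtp
config.action_mailer.smtp_settings = {
  address:              'email-smtp.us-east-1.amazonaws.com', # your SES SMTP endpoint
  port:                 587,
  user_name:            ENV['SES_SMTP_USERNAME'],             # SMTP credentials created above
  password:             ENV['SES_SMTP_PASSWORD'],
  authentication:       :login,
  enable_starttls_auto: true
}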

Then add the following code in app/mailers/ses_mailer.rb:
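A minimal mailer along these lines works; the welcome method name and from address are illustrative:

class SesMailer < ActionMailer::Base
  default from: 'no-reply@yourverifieddomain.com' # must be a verified SES sender

  def welcome(user)
    @user = user
    mail(to: @user.email, subject: 'Welcome!')
  end
end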

Note: If your account is in Amazon SES sandbox, then you must also verify the email address of every recipient. For moving out of the sandbox, see Moving Out of the Amazon SES Sandbox.

View: app/views/ses_mailer/welcome.html.erb:
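A simple view to go with it (the content is illustrative):

<h1>Welcome, <%= @user.name %>!</h1>
<p>Thanks for signing up.</p>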

Call Mailer:
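Assuming the welcome mailer sketched above, you trigger it like this:

SesMailer.welcome(user).deliver_now
# or queue it in the background:
SesMailer.welcome(user).deliver_later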

You can track your email sending statistics on the Sending Statistics page of the Amazon SES console.

Sending emails using the aws-ses gem:

1. Add aws-ses to the Gemfile – https://github.com/drewblas/aws-ses

2. Extend ActionMailer in config/initializers/your_config_file.rb, where your_config_file.rb is the name of the file which contains the initialization routine for the aws-ses gem:
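Going by the gem's README, the initializer registers an :ses delivery method roughly like this (the keys are read from environment variables here):

ActionMailer::Base.add_delivery_method :ses, AWS::SES::Base,
  access_key_id:     ENV['AWS_ACCESS_KEY_ID'],
  secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']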

3. Then set the delivery method in `config/environments/*.rb` as appropriate:
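For example, in config/environments/production.rb, something like:

config.action_mailer.delivery_method = :ses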

You are now ready to send emails from your Rails app through Amazon SES. Just make sure you have verified the email addresses from which you are going to send, and that you are out of the Amazon SES sandbox. Happy mailing!

WebRTC Opentok in Ruby On Rails for Text chat

This post will take you through adding text chat functionality using OpenTok (a WebRTC platform) in Ruby on Rails. OpenTok is a WebRTC platform for embedding live video, voice, and messaging into your websites and mobile apps. OpenTok has its own signaling server, and we will be using the OpenTok signaling API.
First, we will do the basic setup for the OpenTok service. Here we will be using the 'opentok' Ruby gem.

1) Add opentok gem to your Gemfile.

gem 'opentok'

 

2) Then create a session that will attempt to transmit streams directly between clients.
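A rough sketch with the opentok gem; the API key and secret are placeholders, and the :relayed media mode is what asks clients to try streaming directly to each other:

opentok = OpenTok::OpenTok.new ENV['OPENTOK_API_KEY'], ENV['OPENTOK_API_SECRET']

# :relayed attempts peer-to-peer streaming between clients
session = opentok.create_session media_mode: :relayed

session_id = session.session_id
token      = session.generate_token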

Store the session ID and token somewhere in the database, so that you can use them later on.
The token expires in 30 days (the default value), so you will need to regenerate it after 30 days.

3) Now it's time to connect to the session we created above. Add this to your HTML page:
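A minimal sketch of the client-side code, assuming apiKey, sessionId, and token are rendered into the page from the values generated above:

<script src="https://static.opentok.com/v2/js/opentok.min.js"></script>
<script>
  var session = OT.initSession(apiKey, sessionId);

  // Receive: listen for 'text_chat' signals from any client in the session
  session.on('signal:text_chat', function (event) {
    console.log('Message received: ' + event.data);
  });

  session.connect(token, function (error) {
    if (error) { return console.error(error); }

    // Send: publish a message on the 'text_chat' channel
    session.signal({ type: 'text_chat', data: 'Hello everyone!' }, function (err) {
      if (err) { console.error('Signal error:', err); }
    });
  });
</script>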

A signal is sent using the signal() method of the Session object. You can receive a signal by listening for the signal event dispatched by the Session object.
So, all clients who are connected to the same session and listening for the same signal type ('text_chat' here) will receive the message as soon as someone publishes to the 'text_chat' channel.

Some Flexbox basics – CSS tricks

Initially, aligning block-level elements beside one another to achieve multiple-column layouts was a bit tedious, but then the float property came along to make such layouts possible. The float property was widely used by coders and is still being used.

But there are some drawbacks to using float, which make a developer think twice before using it in a layout with multiple columns where proper alignment is a necessity.

Problems with using Floats

  1. A child element, particularly an image, when placed inside a floated parent item, tends to overflow outside the parent element, pushing down the other floated elements beside it.
  2. The float property, when used, needs to be cleared.
  3. Items cannot be centered vertically.

I have listed just a few, but there are many more, and to overcome these shortcomings Flexbox was introduced.

Advantages of using Flexbox

  1. Flexbox helps align items and distribute them equally along the vertical and horizontal axes of a container.
  2. It also ensures that the layout won't break at different screen sizes and on different devices.
  3. It can be really helpful with dynamic content, when the size of the items is unknown.
  4. Flexbox provides a set of properties that can be used to make the layout flexible.
  5. One can center items both vertically and horizontally.
  6. When Flexbox properties are used, the height and width of the items adjust by default to fill the available space, or shrink to prevent overflow.

USAGE:

Starting with the implementation, first set the display property on the parent container as below.

.flex-container {
   display: flex;
}

Or you can use it as an inline-level flex container:

.flex-container {
  display: inline-flex;
}

More Flexbox Container Properties:

1. flex-direction: This property sets the direction of the container's main axis. With this, the items inside the parent container get laid out horizontally or vertically.

.flex-container {
    flex-direction: row | row-reverse | column | column-reverse;
}

2. flex-wrap: This property controls whether the items stay on a single line or wrap onto multiple lines.

.flex-container {
   flex-wrap: nowrap | wrap | wrap-reverse;
}

3. flex-flow : A shorthand property for setting the flex-direction and flex-wrap properties together.

.flex-container {
  flex-flow:  <flex-direction> | <flex-wrap>;
}

4. justify-content: This property distributes the flex items along the main axis, controlling the space between and around them.

.flex-container {
  justify-content: flex-start | flex-end | center | space-between | space-around;
}

5. align-items: It is similar to justify-content but aligns items along the cross axis, perpendicular to the main axis of the flex container.

.flex-container {
  align-items: flex-start | stretch | flex-end | center | baseline;
}

6. align-content: This property aligns the flex lines within the container, distributing any extra space on the cross axis; it only has an effect when items wrap onto multiple lines.

.flex-container {
  align-content: stretch | flex-start | flex-end | center | space-between | space-around;
}
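Putting a couple of these properties together, here is a small sketch (the class name is just for illustration) that centers an item both horizontally and vertically:

.centered-container {
  display: flex;
  justify-content: center;  /* center along the main (horizontal) axis */
  align-items: center;      /* center along the cross (vertical) axis */
  height: 300px;
}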

I too use Flexbox; it's simple to implement and saves a lot of effort – no need to pull in frameworks or hand-build responsive grids.

Thanks
