
Using an HTTP Proxy in Net::HTTP Globally

There comes a time when you want to proxy your Net::HTTP requests/responses in Ruby. One of the easy ways to do this in Net::HTTP is by passing proxy parameters to the '.new' method (during initialization):

proxy_addr = 'your.proxy.host'
proxy_port = 8080

Net::HTTP.new('example.com', nil, proxy_addr, proxy_port).start { |http|
  # always proxy via your.proxy.addr:8080
}

Obviously, this works, but when querying a site via the class-level GET helpers (Net::HTTP.get and Net::HTTP.get_response with a URI) there is no provision to pass any proxy parameters at all (at least, that's what I found out).

Thankfully, Ruby 2.0+ (not available in lower versions, as can be seen over here) has a feature with which you can provide the proxy as an ENV variable (the http_proxy environment variable), and Ruby takes care of the rest.

So, upon setting http_proxy in ENV: BAAM!!! It worked, and it worked globally (i.e. it works even with the GET helpers mentioned above).

BTW, I had the following declaration inside my .bash_profile.

export http_proxy='http://127.0.0.1:8080'  # 8080 is the port where my Squid proxy is running
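As a quick sanity check, here is a minimal sketch (the URL is illustrative):

require 'net/http'
require 'uri'

# With http_proxy set in ENV, even the class-level helpers are proxied:
puts Net::HTTP.get(URI('http://example.com/'))
puts Net::HTTP.get_response(URI('http://example.com/')).code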

Thanks

 

Hack with Rack

Hello everyone, hope you are having a good day.
Ok, some years ago, one of our clients wanted us to implement social network (Facebook, Twitter, LinkedIn) sharing functionality. Knowing all the OAuth implementation that one has to deal with, we decided to use the awesome gem OmniAuth, which does all the heavy lifting of OAuth authorisation with a bare-minimum implementation.
With OmniAuth we managed to get the work done in time, but then we wanted OAuth authorisation to work with Ajax calls as well. I remember going all-out crazy (Googling + debugging OmniAuth code) to find a way to do it, but honestly we couldn't find one. (Note: this was some years back; I'm not sure about the status as of now.)
So we decided to write our own "Rack middleware" that tampers with the OmniAuth response and treats the Ajax request the way we wanted.
The Middleware
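Here is a minimal sketch of the idea (the class name and path check are hypothetical): every request passes through untouched, except that when an Ajax (XHR) call hits the OmniAuth callback and gets a redirect back, the redirect is rewritten into a JSON payload the Ajax caller can consume.

require 'rack'
require 'json'

class OmniauthAjaxResponder
  def initialize(app)
    @app = app
  end

  def call(env)
    status, headers, body = @app.call(env)
    request = Rack::Request.new(env)

    # Only tamper with OmniAuth redirects that were triggered via Ajax.
    if request.xhr? && request.path.start_with?('/auth/') && status == 302
      payload = { :redirect_to => headers['Location'] }.to_json
      [200, { 'Content-Type' => 'application/json' }, [payload]]
    else
      [status, headers, body]
    end
  end
end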

 

and then we inserted the middleware where we wanted it:
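Something along these lines (the placement is an assumption; anywhere after the session middleware works):

# config/application.rb
config.middleware.use OmniauthAjaxResponder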

 

Let's look at our middleware stack.
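The standard Rake task prints it; our responder should show up alongside the rest of the stack:

$ rake middleware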
Yup, that's it: a "Hack with Rack" to get a concrete solution for our problem that isn't too fancy but still does the task we needed done.

Building an OCR using PDF.js and PDFText

A couple of years back I was handpicked (not really 🙂 ) for a project that required me to build an application that would extract text from the PDFs uploaded into the system, using a bunch of baseline co-ordinates mapped against a sample PDF at the start. The project was named Invoice Analyser (IA), since all the PDFs were actually invoices of our client.

I still vividly remember the day when I was made aware of the requirement on a call; soon after the call ended, I knew it was going to be tough, but I kept faith in myself.

If I had to sum up the requirements, they would fit into 3 important points.

1. Display a PDF online (the baseline/sample PDF).
2. Map a set of co-ordinates of marked-up areas in a sample PDF (which is to be displayed online).
3. Extract the text at all the specified co-ordinates in any PDF uploaded henceforth (this is important because I needed those sets of co-ordinates to create a baseline sample that, once created, would be used against all similar-looking PDFs to extract text out of them from the relevant co-ordinates).

Now, knowing all the above scenarios, I started with something that I believe everybody does (pray? no 🙂 ): googling. Luckily, I managed to knock off task 1, thanks to the amazing library called PDF.js.

For those who don't know about PDF.js: it's an amazing open-source library (maintained by Mozilla) that provides a general-purpose, web-standards-based platform for parsing and rendering PDFs. As far as I remember, it uses an HTML5 canvas to display the PDF online (so beware: on a non-HTML5 browser it might not work for you).

Now, with 1 down, I concentrated my focus on tasks 2 and 3, and this was challenging. Upon examination, I found that the co-ordinates mapped on the PDF.js display (online) were way different from those in the actual PDF. Here is a link to the question I asked about this on Stack Overflow (seriously, after reading the first answer by @DanielLi, my confidence was deeply dented), until, one fine day, I finally got it. "That's it. I found it. Eureka, Eureka!" (was my initial reaction).

So, to map precise co-ordinates in PDF.js with respect to the actual PDF, the one thing that played an important role was the ASPECT RATIO (even though the PDF rendered by PDF.js had different co-ordinates from the original PDF, it still maintains the same aspect ratio). So, with a little bit of jQuery and a little bit of mathematics, I finally managed to nail task 2. Look, I have also answered my own question here (describing how I did it).
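The gist of the mapping is a per-axis scale factor (a sketch; the variable names are hypothetical, with pdfWidth/pdfHeight taken from the PDF's own page dimensions):

var scaleX = pdfWidth / canvas.width;
var scaleY = pdfHeight / canvas.height;

var pdfX = clickX * scaleX;
// PDF co-ordinates grow upward from the bottom-left corner, while canvas
// co-ordinates grow downward from the top-left, so flip the Y axis.
var pdfY = pdfHeight - clickY * scaleY;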

Now, with 1 and 2 knocked off, task 3 was actually pretty easy. I came across 2 amazing libraries,

iText
PDFBox

that would return the text at the supplied co-ordinates. I went with iText (choosing iText over PDFBox was a personal choice).

End Result:

Disclosure:
The project is almost 3 years old now. I'm aware that PDF.js has matured quite significantly since. For all those whose end goal differs from mine (I needed to create baseline co-ordinates from a sample and use them to extract text from any PDF uploaded in future), you might want to look at the following example.

Also, if you have a requirement similar to mine, you might want to look at this answer as well.

I stumbled upon the same issue as the OP did, and the answer helped me a lot.

So, to map the extracted co-ordinates in that case, just interchange your width and height based on the page orientation.

All in all, the project was seriously a challenge, and being able to knock it off within the specified time frame is something I'm still proud of.

I have open-sourced a demo version of the OCR, which can give you a head start in building something similar to what I did.

GITHUB SOURCE

Face Recognition in Ruby Using Kairos, Also Finding Your Celebrity Look-Alike

Humans have an innate need to be identified with a group; it drives us to be an important part of something bigger than ourselves. This implies a relationship that is greater than familiarity or acquaintance. Facial structure plays a big part in that identity, making online services like find-your-celebrity-look-alike a guaranteed success.

Today, we are going to look under the figurative hood at how this technology works. We will be building a sample find-your-celebrity-look-alike application in Ruby using Kairos, a third-party face recognition API. With a simple example, we will discover how face recognition works and how we use data from Kairos, and we will briefly touch on collecting and cleaning up celebrity facial data. Alternatives to the Kairos API include Face++, Animetrics, and Rekognition.

The first step is building a cache of celebrity faces. Before the advent of publicly available face data on the internet, this tedious job had to be done manually, but thanks to crowdsourcing, a lot of public repositories with facial data are now available for non-commercial use.

One such repository, FaceScrub, is a database of 100,000 photos of 530 celebrities, classified by gender and name.

A snippet of it looks like this (image: http://vintage.winklerbros.net/Images/facescrub.jpg).

 

So to begin, first download the data set using the following command:

$ curl -O http://vintage.winklerbros.net/faceScrub.zip

Unzip the file.

$ unzip faceScrub.zip

The compressed file is password-protected; you can get the password by filling out this form. Unzipping yields the files facescrub_actors.txt and facescrub_actresses.txt. You can view their contents using vim:

$ vim facescrub_actors.txt

$ vim facescrub_actresses.txt

Now you have files that act as an index to the final cache of celebrity face images we need. Next, we write a Ruby script to read this index and enroll the images. First install the gems:

$ gem install kairos-api

$ gem install typhoeus

Here is the unified code to enroll the data and match your Facebook profile photo against the celebrities:

require 'kairos'
require 'typhoeus'

# ex: facebook_user_name: shashank.singh
facebook_user_name = "put_your_facebook_user_name_here"

image_urls = []
begin
  # Repeat the same code for facescrub_actresses.txt
  file = File.new("facescrub_actors.txt", "r")
  while (line = file.gets)
    # Columns are tab-separated; names contain spaces, so after collapsing
    # whitespace the image URL sits at index 4.
    content = line.gsub(/\s+/m, ' ').strip.split(" ")
    image_urls << content[4]
  end
  file.close
rescue => err
  puts "Exception: #{err}"
end

# counter is used to set the subject_id
counter = 1
result = ""

# Get the App Id and App Key from https://developer.kairos.com/admin
client = Kairos::Client.new(:app_id => 'put_your_app_id_here', :app_key => 'put_your_app_key_here')

image_urls.each do |url|
  begin
    result = client.enroll(
      :url          => url,
      :subject_id   => counter,
      :gallery_name => 'celebrities')
  rescue => e
    puts "Error importing this url because of #{e}"
  end
  puts "[#{counter}] #{result}"
  counter = counter + 1
end

# You can list all subjects enrolled:
# puts client.gallery_view(:gallery_name => 'celebrities')

# Let's get your Facebook profile photo and match it against the 530 celebrities we have
graph_url = "http://graph.facebook.com/#{facebook_user_name}/picture?type=large"

# Let's print which celebrity faces match ours
puts client.recognize(
  :url             => graph_url,
  :gallery_name    => 'celebrities',
  :threshold       => '.2',
  :max_num_results => '5')

In my next post, we will take the concepts discussed here and create a full-blown web application in Ruby on Rails.

 

Require.js with Rails

If you know what require.js does and Ruby on Rails is your thing, then you have come to the right place. If not, I encourage you to read up on the following topics:

  1. Modular approach to writing JavaScript
  2. AMD and JavaScript
  3. Dependency management, Module Loading, and Lazy Loading

Installation and Setup

  1. Add the 'requirejs-rails' gem to the Gemfile of your project. It relieves you from some manual labour.
  2. Get rid of everything in your application.js, which is located under /assets/javascripts/application.js.
  3. Now locate your application layout under views/layouts/application.html.erb.
  4. Replace your <%= javascript_include_tag "application" %> with <%= requirejs_include_tag "application" %>.
  5. Do 'bundle install' and restart the Rails server. When your server is back up and running, view the page source and you should see the following in your HEAD:

<script data-main="application" src="/assets/require.js"></script>

  6. Avoid config.assets.precompile.

Test if all of the steps above were successful

  1. Open application.js
  2. Paste the following code in it
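A smoke test along these lines will do (the exact snippet is an assumption):

// application.js -- if require.js is wired up, this runs once the page loads.
require([], function () {
  alert('require.js loaded fine');
});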

If everything went well thus far, you should see an alert, and if you are using Google Chrome, you should also see in 'Developer Tools', under 'Network', that require.js and then application.js loaded.

Start writing your modules

Let's say you need an Ajax loader for your Ajax requests and you wish to write a separate module for it. Great idea. Now let me tell you how you can write a module that can be loaded with require.js. In other words, the module we are about to write will comply with AMD (Asynchronous Module Definition).

Basic module syntax:

ajaxloader.js
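Something like this (a sketch; the returned methods are illustrative):

// ajaxloader.js -- an AMD module with no dependencies. `define` registers
// the module; whatever the factory function returns is what dependents get.
define([], function () {
  return {
    show: function () { /* reveal a spinner element */ },
    hide: function () { /* hide it again */ }
  };
});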

You define a module by using 'define', which usually takes 2 arguments: the 1st is an array of the module's dependencies, and the 2nd is an anonymous factory function. You can return a value, another function, or an object literal, as above.

Basic module with dependencies:

Let's say our ajax loader module depends on jQuery. jQuery then needs to be passed in as a reference, like this:
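For example (a sketch, reusing the hypothetical loader from above):

// ajaxloader.js -- the same module, now declaring jQuery as a dependency.
// require.js loads 'jquery' first and hands it to the factory as `$`.
define(['jquery'], function ($) {
  return {
    show: function () { $('#ajax-loader').show(); },
    hide: function () { $('#ajax-loader').hide(); }
  };
});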

You must wonder: how does 'define' find dependencies? Well, we need to tell require.js about the locations where it can find modules and dependencies.

Configure require.js

  1. Open your application.js and let’s add jQuery as your dependency.

Syntax:
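A configuration along these lines (the CDN URL is illustrative; the local copy acts as the fallback mentioned below):

// application.js -- map the module id 'jquery' to a CDN copy, with a local
// fallback under vendor/ that require.js tries if the CDN fails.
require.config({
  paths: {
    jquery: [
      'https://ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min',
      'jquery.min' // e.g. vendor/assets/javascripts/jquery.min.js
    ]
  }
});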

There are a lot of configuration options available at http://requirejs.org/docs/api.html#config; however, for now we shall focus on setting up jQuery as a dependency (jQuery is AMD-compatible, unlike Backbone, Underscore, and many other popular libraries; the SHIM config can help you out in that regard: http://requirejs.org/docs/api.html#config-shim).

Explanation:

Now it’s time to tell require.js about our ajax loader module. Follow the steps below

  1. Add the following code in application.js right below require.config chunk of code
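For instance (a sketch):

// application.js, right below the require.config block -- require.js
// resolves 'ajaxloader' and its jQuery dependency before the callback runs.
require(['ajaxloader'], function (ajaxloader) {
  ajaxloader.show();
});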

Now let's take a look at our module. The first string parameter in its dependency array is 'jquery', and according to our require.config, if the CDN fails, jQuery will be pulled from the 'vendor' directory.

The order in which require.js pulls files and sticks them in the head:

  1. require.js
  2. application.js
  3. jquery.min
  4. ajaxloader.js (Loaded after jQuery because it depended on jQuery)

In closing, require.js helps you

  1. Abstract your code without polluting the global namespace by using AMD pattern in your modules.
  2. Manage Dependencies in a much more structured way.
  3. Lazy load your modules.
  4. Optimize your modules

For more information visit http://requirejs.org/

Upgrading Rails 2 to Rails 3 is easy!

Many people might consider upgrading their Rails 2 apps, since even Rails 4 is now released! We too recently upgraded the website of one of our clients, Founder's Institute, from Rails 2 to Rails 3. Migration is sometimes tricky, but we outlined a plan and were able to finish up quickly with the fewest bugs. The Rails community maintains incredibly useful documentation whenever they push out major releases; it was extremely helpful for us.

When we started off, first we moved all the code into a Git repo (it was previously on SVN). We decided to break the complexities of migration into smaller, simpler parts and take small steps towards the upgrade.

The app was a fairly standard Rails 2 app with MySQL. To set up the project, the first step was to upgrade all gems. This part was mostly hassle free, thanks to Bundler.

We decided to upgrade all models to the Rails 3 way.

Active Record:

Rails 3 has a new Active Record query engine: Rails now uses Arel to build SQL queries from Active Record statements. This increased the expressiveness of queries and exposed a simpler API that allows developers to build a Rails-compliant ORM. This means seamless integration of your favourite ORMs like DataMapper and Mongoid, and it is quite a leap towards user flexibility.

There are many new finder methods supported by Active Record:

• where (replaces the old :conditions option)
• having
• select
• group
• order
• limit
• offset
• joins
• includes (replaces :include)
• lock
• readonly
• from

Statements like:
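# An illustrative Rails 2 finder (assuming a User model):
User.find(:all, :conditions => { :active => true }, :order => "created_at DESC", :limit => 10)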

are now written as:
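User.where(:active => true).order("created_at DESC").limit(10)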

The query now looks cleaner and much more readable.

Another cool feature of Active Record is chainability. Since all finder methods now return a Relation object, it is easy to chain methods to filter objects and pass conditions while querying.

Active Record now encourages lazy loading of data.

To force early loading, just call the ‘.all’ method on your Active Record query statement. For example,
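# Illustrative: the relation stays lazy until it is enumerated...
active_users = User.where(:active => true)       # no SQL has run yet
# ...while .all forces the query to execute right away:
active_users = User.where(:active => true).all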

Usage of named_scope changes as well: named_scope is now called simply scope, and it no longer accepts an options hash.
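# Rails 2 (illustrative):
named_scope :active, :conditions => { :active => true }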

is now:
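scope :active, where(:active => true)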

Rails 3 Routes:

Next up was fixing the Rails 2 style routes. Since Rails 3 got a new router, most Rails 2 routes were either deprecated or unsupported. Fixing the routes was easy enough.
This neat little one-liner takes care of migrating most of your routes:

rake rails:upgrade:routes > config/new_routes.rb

Comparison of the new routes with the old ones gives a quick view of syntax changes in the Rails 3 routes.

Most routes like:
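# Rails 2 (illustrative):
map.resources :products
map.connect ':controller/:action/:id'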

are now written as:
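resources :products
match ':controller(/:action(/:id))'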

Declaring options in routes can be done using the new style:
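# Illustrative: options are declared inline on the route
match 'products/:id' => 'catalog#view', :as => :product, :via => :get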

We no longer need to pass hashes with nil values to specify optional params, and the new routes favor RESTfulness. We simply write it like this:
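# Optional segments are wrapped in parentheses instead of :id => nil defaults:
match 'products(/:id)(.:format)' => 'catalog#view'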

Action mailer:

Action Mailer in Rails 3 has a new syntax and emailers are structured in a more decoupled way.

Mailers are now defined in the app/mailers directory instead of app/models.
Variables accessible in mailer views are set directly using @my_variable instead of @body['my_variable'].

Mails are sent using:
MyMailer.my_mail_method(my_args).deliver
instead of
MyMailer.deliver_my_mail_method(my_args)

The mailer method:
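# Rails 2 (an illustrative welcome mailer):
class MyMailer < ActionMailer::Base
  def welcome(user)
    recipients user.email
    from       "noreply@example.com"
    subject    "Welcome aboard"
    body       :user => user
  end
end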

Is now:
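class MyMailer < ActionMailer::Base
  def welcome(user)
    @user = user  # accessible in the view, just like a controller action
    mail(:to => user.email, :from => "noreply@example.com", :subject => "Welcome aboard")
  end
end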

The new welcome method looks more like a controller action for emails, where the instance variables are accessible in the views.

To avoid having to write the “from” field for every action in your mailer, we can now choose to write:
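class MyMailer < ActionMailer::Base
  default :from => "noreply@example.com"
end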

Agnosticism with jQuery, rSpec, and Data Mapper

Rails 3 has now also adopted Agnosticism with jQuery, rSpec, and Data Mapper.

Rails 3 supports unobtrusive JavaScript, and one can choose to use jQuery (or any other library/framework) over Prototype, which had been the default until now. We chose jQuery and then started migrating the Rails 2 helper methods used in views to the Rails 3 style. The most commonly used expression is form_remote_tag, and it needed to be migrated like this:
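<%# Rails 2 (illustrative) %>
<% form_remote_tag :url => { :action => 'create' } do %>
  ...
<% end %>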

to
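<%= form_tag({ :action => 'create' }, :remote => true) do %>
  ...
<% end %>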

The :remote => true option tells Rails that the form is going to be submitted using JavaScript, and we write methods that listen to the form's submit event.

Unobtrusive JavaScript removes the ugly inline JavaScript which appeared in the HTML form tag when form_remote_tag was used:
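<!-- Rails 2's form_remote_tag emitted an inline Prototype handler (illustrative): -->
<form action="/products" method="post" onsubmit="new Ajax.Request('/products', {asynchronous:true, evalScripts:true, parameters:Form.serialize(this)}); return false;">

<!-- Rails 3 with :remote => true emits just a data attribute instead: -->
<form action="/products" method="post" data-remote="true">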

html_safe:

HTML stored as a string in a variable now has to be marked as safe in the views using the .html_safe method:
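<%# Illustrative: without .html_safe the markup would be escaped and shown as text %>
<%= @banner_html.html_safe %>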

Bundler:

Managing the dependencies of a Rails app became so much easier with Rails 3! Bundler allows you to specify the versions of the gems you want to use in your application. The Gemfile specifies any gems your app needs, and Bundler takes care of downloading the appropriate versions and installing them.
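A minimal Gemfile sketch:

source 'https://rubygems.org'

gem 'rails', '~> 3.2'
gem 'mysql2'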

Issues we faced:

Though there were last-minute issues, most of them were UI fixes where raw HTML tags were displayed because of a few missed .html_safe calls on strings.

Upgrading Rails apps has been pretty much straightforward, thanks largely to the awesome Rails community.

When/why should you upgrade?

1. In this fast-paced, ever-improving world of software? Almost always!
2. You need more security: newer versions fix security holes.
3. Code refactoring is needed.
4. Increased maintainability.
5. To help out the community: you can always give feedback on new features in the framework/gems.
6. You're using plug-ins and gems that aren't supported in older versions.
7. It's easier to get help with new stuff than old.

Hope this short post helps you with your migration!

Founder’s Institute on upgrading Rails 2 to Rails 3

Mongodb – not evil, just misunderstood

Lately I've been reading a lot about MongoDB and posts dissuading you from ever using it. Some of these articles are seriously outrageous and make me wonder what got those teams to start using MongoDB in the first place. Sarah Mei's recent article was one that upset me a lot, especially since the title was so inflammatory.

My post, however, aims at highlighting the areas where MongoDB works and how it performed brilliantly for us. As someone leading the engineering efforts for a shipping and logistics company, I wasn't too happy initially to see MongoDB being used as the primary datastore, but after 2 years I'm more than sure that this was definitely the datastore for us. I've outlined areas that confused me when I first encountered them, only to learn that they were actually invaluable features.

"No migrations": is that all you have?
The advantages of schemaless documents are priceless; not having to migrate is just one of the perks. Our schema was largely of the form Orders (having many) Shipments (going_from) ShipPoint (to) ShipPoint.

We rarely used most of these entities without the others, and it served us extremely well to manage them as self-contained documents embedding the others.

MongoDB writes are fire-and-forget? WTF?
This doesn't always have to be the case, though it significantly contributes to MongoDB's fast writes. MongoDB's write concern configuration lets you choose the precise level of persistence that must be achieved before a write is called successful, so if the write fails, you know it's failed. Being able to know whether your write has propagated to replicas or has been journaled is a pretty neat feature.
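For example, with the 1.x-era Ruby driver (a sketch; database and field names are illustrative):

require 'mongo'

# Require acknowledgement from two replica-set members, with the write
# journaled, before the insert is declared successful.
coll = Mongo::MongoClient.new.db('logistics')['shipments']
coll.insert({ 'order_id' => 42 }, :w => 2, :j => true)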

How can the default writes be fire-and-forget?
It just made sense: given all the information to configure it the way you prefer, I would always go with this approach. We add a lot of notes to each shipment as it gets reviewed at different levels by the sales, accounts, and other teams. These notes generally serve as a reminder, or a single line indicating that it's been viewed; they don't critically affect the business workflows of the application. It just seemed logical that these were fire-and-forget operations and could be stored as quickly as possible.

Another place where this is extremely handy for us is tracking. We track several hundred shipments each day, logging every tracking status, location, and time while the shipment is in transit. This information is handy for customers who want to keep an eye on where their shipment has reached. Chances are, when fetching this information, some of it is not saved the first time, but we expect it will be obtained during a second fetch 30 minutes later. The default write concern works brilliantly there.

Read locks and write locks: don't they slow you down?
They do, but since most of the data is memory-mapped, this doesn't affect you in a major way. However, I did notice people always working with the primary of a replica set and never querying the secondaries, for fear of inconsistent data. I think if you have sufficient memory, your replication lag will be pretty small; besides, if you don't need the data to be consistent at every instant, querying a secondary is a sensible option to reduce the load on your primary. Which brings me to the primaryPreferred read preference: it allows you to query a secondary in your replica set when your primary is not available. It's a fairly safe choice in my opinion.

We began querying secondaries for ShipPoints, which didn't change that often.
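With the 1.x-era Ruby driver that looks something like this (a sketch):

# Prefer the primary, but fall back to secondaries when it is unavailable.
client = Mongo::MongoClient.new('localhost', 27017, :read => :primary_preferred)
ship_points = client.db('logistics')['ship_points']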

All the memory usage is killing me!
This is one of the things that took me time to accept. MongoDB expects your working set to fit into RAM along with your database's indexes. Your working set is the data that is frequently queried and updated. Since MongoDB works with memory maps, most of your working-set data is mapped into memory. When data is not available in memory, a page fault occurs and it has to be fetched from disk. This carries a performance penalty, but as long as you have some swap space you can safely load the data back in.

While our working set was fairly small, our reporting application needed access to the entire set of shipment records to generate reports. This resulted in Mongo running out of memory and spitting OperationFailure errors on a regular basis.

Our initial approach was naive, and we started using Redis (another datastore that's pure gold) to store snapshots of information, but we soon realised we could just use MongoDB to make it work.

So can I never generate reports without having my dataset fit in memory?
Rollups to the rescue. Rollups are pre-aggregated statistics that help speed up your aggregation process. They make life significantly easier, as you query only short time ranges to generate micro-reports.

Here is a simplified snapshot on how we generated daily and monthly aggregates with mapreduce.
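A sketch, with hypothetical collection and field names (dropping day from the emitted key gives the monthly variant):

map_fn = <<-JS
  function() {
    var d = this.created_at;
    emit({ year: d.getFullYear(), month: d.getMonth() + 1, day: d.getDate() },
         { count: 1 });
  }
JS

reduce_fn = <<-JS
  function(key, values) {
    var total = 0;
    values.forEach(function(v) { total += v.count; });
    return { count: total };
  }
JS

shipments = Mongo::MongoClient.new.db('logistics')['shipments']
shipments.map_reduce(map_fn, reduce_fn, :out => { :merge => 'daily_shipment_rollups' })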

So you mean this can't be realtime?
Yes it can: through atomic updates. Just as we generated rollups to speed up reporting, we can maintain pre-aggregated snapshots of the same information, like this:
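A sketch of a pre-aggregated daily snapshot document, one per day, with counters pre-allocated (names are hypothetical):

stats = Mongo::MongoClient.new.db('logistics')['shipment_stats']
stats.insert(
  '_id'    => '2013-11-20',
  'total'  => 0,
  'hourly' => Hash[(0..23).map { |h| [h.to_s, 0] }]
)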

Once this is in place, you can update your aggregates by simply incrementing the right counter with something like:
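# Atomically bump the right counters as each shipment event arrives:
stats.update(
  { '_id' => '2013-11-20' },
  { '$inc' => { 'total' => 1, 'hourly.14' => 1 } },
  :upsert => true
)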

I haven't even touched upon the replication and sharding features that MongoDB offers; I will reserve those for another post. To summarise, I feel MongoDB is awesome: it's a lot like the kid in class you dismissed because your friends thought he was weird, till you got to know him.

Disclaimer: I don't claim to be an authority on MongoDB, and everything I have written about here is stuff I've learnt while working with it. I recommend reading the documentation and going through the talks available on the MongoDB website.

Simple Quick SSH Tunneling to Expose Your Web App

I've been a localtunnel user for quite some time now, and I really love the fact that it's a free service, quick to install, and an easy way to expose your development app to the world. There are quite a few worthy alternatives now, such as showoff.io and PageKite, that do pretty much the same thing.

But at times it gets annoying (especially in the middle of other work) when I'm unable to access localtunnel because it's down, or I've outrun the free usage limit for PageKite.

I generally end up using localtunnel when I have to wait for an IPN from PayPal or Authorize.net (relay response) while working in my development environment. So here is a quick way to roll out a basic version of the service for your own needs.

Before we move on to the "how to", here is a quick intro to SSH tunneling.

Now, I assume that you have a staging server (or some server you have ssh access to).

The following terminal command does pretty much the same thing we do with the Ruby code that follows:
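$ ssh -R 0.0.0.0:9999:localhost:3000 user@abc.example.com  # user and host are placeholders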

The -R flag indicates that you'll be forwarding your local app running on port 3000 to remote port 9999 on the remote host abc.example.com, so that everyone can access it. Your application running on localhost:3000 is now accessible at abc.example.com:9999.

We do the same thing now using the Ruby net-ssh library. The following code snippet is customised to my defaults, but it's simple enough to change those settings.

#!/usr/bin/env ruby
require 'rubygems'
require 'logger'
require 'net/ssh'

# Terminal equivalent:
#   ssh -R 0.0.0.0:9999:localhost:3000 sid@abc.idyllic-software.com
puts "Enter the remote server name you wish to forward your local port to: (staging.example.com)"
remote_host = gets.chomp
puts "Enter the remote port you wish your local app to be available on - on the remote server (ex: 9999)"
remote_port = gets.chomp
puts "Enter remote server user to ssh with"
remote_user = gets.chomp
puts "Enter local port to forward"
local_port = gets.chomp

# Fall back to my defaults when nothing is entered; ports must be integers.
remote_host = "abc.idyllic-software.com" if remote_host.empty?
remote_user = "sid" if remote_user.empty?
remote_port = remote_port.empty? ? 9999 : remote_port.to_i
local_port  = local_port.empty?  ? 3000 : local_port.to_i

puts "Forwarding 127.0.0.1:#{local_port} to #{remote_host}:#{remote_port}"
Net::SSH.start(remote_host, remote_user) do |ssh|
  puts "Connecting..."
  ssh.logger.sev_threshold = Logger::Severity::DEBUG
  # Remote forward: connections to 0.0.0.0:remote_port on the server are
  # tunnelled back to 127.0.0.1:local_port here (the server needs
  # GatewayPorts enabled to bind 0.0.0.0).
  ssh.forward.remote(local_port, '127.0.0.1', remote_port, '0.0.0.0')
  ssh.loop { true }
end

Hope this helps

Some CarrierWave Tips n Tricks

The wait is finally over! After nearly a year of a rough patch with virtually no posts, I'm back with a new one. As the title suggests, it covers some tips and tricks I managed to pull off with CarrierWave while working on a project.

A) Upload Simultaneously to File and S3

I know there are tons of ways to do this, like uploading the file to your server first and then pushing it to S3 using a background process, blah blah.
Uploading directly to S3 is also an option, since Amazon S3 allows CORS.

All of these work great, but I kind of didn't want to do them and tried another approach instead.
(Mind you, nothing against the other approaches mentioned here or already existing.)

So what is my approach? Well, basically nothing: all you have to do is customize your uploader a bit (not heavy customization, though).

Say this is what your standard uploader looks like, with the default storage set to :file:

class MyUploader < CarrierWave::Uploader::Base
  storage :file
  ... blah ...
  ... blah ... 
  ... blah ... 
end

Now, to simultaneously upload to S3, all you have to do is place this in your uploader:

version :s3_version do
  storage :fog

  def full_filename(for_file)
    super(for_file).sub(/s3_version_/, '')
  end
end

So your uploader finally looks like this:

class MyUploader < CarrierWave::Uploader::Base
  storage :file
  ... blah ...
  ... blah ...

  version :s3_version do
    storage :fog

    def full_filename(for_file)
      super(for_file).sub(/s3_version_/, '')
    end
  end
end

That's it: define a version :s3_version (virtually any name you want) and then use :fog as the storage inside it (make sure to change the regular expression in case you choose a version name other than s3_version).

Note: also make sure to add the fog settings for CarrierWave (S3 is just taken as an example; this would work with any storage that fog provides).
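For reference, a sketch of those settings (the keys and bucket name are placeholders):

# config/initializers/carrierwave.rb
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => 'YOUR_ACCESS_KEY',
    :aws_secret_access_key => 'YOUR_SECRET_KEY'
  }
  config.fog_directory = 'your-bucket-name'
end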

Now, most of you would argue that if, say, S3 isn't responding or something goes wrong with S3 over the network, then your local storage goes for a toss too, and I agree completely with that, so I urge the reader to please take note of this. As for me, I'm happy taking that risk. 🙂

B) Delete the Record but Don't Delete the Attachment

Well, not all would agree with me on this, but there are times when we've been asked to delete the uploader record but not delete the stored file 🙂

There could be many reasons; in my case there was a requirement that we retain the original copy of the client's registered agreement even though the client no longer exists.

As said, there exist dozens of ways to achieve this as well, but let me show you how I managed to pull it off.

All I did was override the method remove! in my uploader, and I also defined an attr_accessor on my model (i.e. attr_accessor :keep_file on my model).

Enough said, time for code:

class MyUploader < CarrierWave::Uploader::Base
  def remove!
    super unless model.keep_file
  end
end

Now, to keep the stored file and delete only the record, all I do is set :keep_file => true, and the rest is taken care of.

That is, the record is destroyed, but the file stays in the storage system.
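In practice that looks something like this (a sketch; the client model is hypothetical):

client.keep_file = true
client.destroy   # the record goes away; the stored agreement file stays put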

There is another approach, though I haven't tried and tested it: overriding the remove? method, something like this (note: I haven't tested this):

class MyUploader < CarrierWave::Uploader::Base 
  def remove?
    false
  end
end

C) Customize the CarrierWave Error Message

Ever since I asked (and answered) my own question on this, I've seen it getting a lot of traction. What I had asked was:

How do you change the CarrierWave error message and set your own?

Well, clearly the answer is to define a key/value pair in your en.yml for the desired CarrierWave error message, and that's it.

e.g. let's say CarrierWave's default message for extension whitelist validations is this:

You are not allowed to upload %{extension} files, allowed types: %{allowed_types}

Now I want to change it for some reason (reason withheld 🙂 ). So all I have to do is define something like this in my en.yml:

en:
  errors:
    messages:
      extension_white_list_error: 'My Custom Message'

A full list of CarrierWave key/value pairs can be found over here.

That's it, guys. Hope you liked the tricks and found them useful.

Thank you

 

Alternative authentication process for your Rails application using OmniAuth and Devise

We are living in an era of web applications. Every day you come across new and innovative web applications, which brings to light the cut-throat competition between them. Every application strives to attract as many users as possible. In this respect it is very important to analyze why users will use your website and continue to use it.

Your application idea needs to be strong and path-breaking. Having said that, it should also be simple enough to deal with real-world problems. Although it seems easy to talk about, it is really difficult to find simple solutions to complex problems. Apart from the idea, technical ease of use, online presence, and a solid marketing initiative for your app also matter when it comes to attracting users. How you implement your idea technically has a big role to play in this.

What is the authentication process?

In most user-specific web applications, the first step is the sign-up/login process. Generally, the user doesn't have access to all the services offered by the application unless he/she registers with it. Registration is not necessary for all applications, but it is present in most cases.

Simplifying authentication!!

Entering your email address and a password into every website that you use can be time-consuming. Worse is remembering your login credentials for all the different web apps you use.

Here is a solution to ease the problem: OAuth allows you to authenticate against an OAuth provider. Rather than providing your username/email id and password to yet another site, you authenticate directly against a central provider. The central provider then supplies tokens that the different applications use to read and/or write the user's data.

How does OAuth work?

Figure 1: The OAuth authentication process

You need to register your application with service providers like Facebook/Twitter/GitHub in order to authenticate against them. Once you register, you're given a unique key to identify your application and a secret passphrase, which is actually a hash. Neither of these should be shared. When your application makes a request to an OAuth provider, it sends these two parameters along as part of the request so that the provider knows which application is connecting.

Figure 1 depicts the complete process. A user initiates the authentication process by clicking on the link of the service provider he wants to authenticate with. Your application then sends the unique key and the secret passphrase (given to you by the service provider when you registered your application) and begins the authentication process by requesting a token (A). This token will be used as an identifier for this particular authentication request cycle. The provider grants this token and sends it back to your application. Your application then redirects the user to the provider (B) in order to gain the user's permission for this application to access his data. When signing in with Twitter, your users would see something like the following figure:

Figure 2: Twitter authentication

Here the user can either choose to authorize your application or cancel the process. If he chooses to authorize, your application gets access to his data. If he clicks cancel, he is redirected back to your application without access being granted.

When the user clicks on Authorise App (C, D), he is redirected from the provider back to your application with two parameters: an oauth_token and an oauth_verifier. The oauth_token is the request token you were granted at the beginning, and the oauth_verifier is a verifier of that token.

OmniAuth then uses these two pieces of information to gain an access token (E, F), which will allow your application to access the user's data. Additional data, such as the user's attributes, also gets sent back here; the provider determines the extent of this additional data.

What are OmniAuth and Devise?

OmniAuth, as mentioned on its GitHub page, is a library that standardizes multi-provider authentication for web applications. This means you can make your app authenticate through Twitter, Facebook, LinkedIn, Google Apps, GitHub, Foursquare, and more, and have complete control from there. This page lists several gems available for different providers.

Railscasts for Omniauth : Part 1
Railscasts for Omniauth : Part 2

Devise is based on Warden. It is a complete authentication solution. Devise is a Rails Engine and it covers controllers and views as well. It allows you to have multiple roles (or models/scopes) signed in at the same time. Devise is modular and currently consists of eleven modules. Each of these modules provides a different aspect of authentication. For example, module Validatable provides validations of email and password.

Railscasts for Devise

Code implementation

To have a look at the code implementation, we will consider the Twitter authentication process. Add the following gems to your Gemfile:

gem 'omniauth'
gem 'omniauth-twitter'

Registering your application on Twitter

You need to register your app on Twitter before your users can use it to log in to your application. Registering with Twitter gives you a consumer key and consumer secret for your application. These are used by Twitter to identify your application when a user tries to use this service. It looks something like the following image:
OAuth Setting

Setting up OmniAuth configuration

The callback URL in the above image is the URL to which Twitter will send the response of the authentication process. This URL points back to your application. Next, tell OmniAuth about this provider. For a Rails app, your config/initializers/omniauth.rb file should look like this:

Rails.application.config.middleware.use OmniAuth::Builder do
  provider :twitter, "CONSUMER_KEY", "CONSUMER_SECRET"
end

Handling the response from your provider

When a user clicks the Twitter button, your application begins the OAuth process. To use OmniAuth, you only need to redirect users to /auth/:provider, where :provider is the name of the strategy (for example, facebook or twitter). From there, OmniAuth takes over and leads the user through the necessary steps to authenticate with the chosen strategy. After successful completion of the process, Twitter sends back the user's information in a response like this:

{
  ...
  "extra" => {
    ...
    "user_hash" => {
      "id"          => "14506011",
      "screen_name" => "ketandeshmukh",
      "name"        => "Ketan Deshmukh",
      ...
    }
  }
}

This is a very stripped-down version of the response. It contains the following important values:

  • The Twitter-provided ID of the user
  • The Twitter username
  • The display name

Twitter doesn't provide us with the email id of the user. Hence, in your application, if the user has authorized himself using Twitter, you can use his Twitter display name instead of an email id, or you can ask such users for their email separately; this depends on the logic of your application.
To process the callback response, add the following line to your routes.rb file:

match '/auth/:provider/callback', to: 'sessions#create'

Let's see what the controller and model look like. The create method of your sessions controller (in this case) will look something like this:

class SessionsController < ApplicationController
  def create
    @user = User.find_or_create_for_twitter(env["omniauth.auth"])
    flash[:notice] = "Signed in with Twitter successfully."
    sign_in_and_redirect @user, :event => :authentication
  end
end

When a request is made to this action, the details for the user are accessible in the env["omniauth.auth"] key. env is the Rack environment of this request, which contains other helpful things such as the path of the request.

You then pass these details to a currently undefined method called find_or_create_for_twitter. It will deal with finding a User record for this information from Twitter, or creating one if it doesn’t already exist.

You then set a flash[:notice] telling the user they’ve signed in and use the Devise-provided sign_in_and_redirect method to redirect your user to the root_path of your application.

To make this action work, you need to define find_or_create_for_twitter in your User model. You can do this using the code in the following listing, in app/models/user.rb:

def self.find_or_create_for_twitter(response)
  data = response['extra']['user_hash']
  if user = User.find_by_twitter_id(data["id"])   # Find existing user
    user
  else                                            # Create a user with a stub password
    user = User.new(:email    => "twitter+#{data["id"]}@example.com",
                    :password => Devise.friendly_token[0, 20])
    user.twitter_id           = data["id"]
    user.twitter_screen_name  = data["screen_name"]
    user.twitter_display_name = data["name"]      # the response key is "name"
    user.confirm!
    user
  end
end

That's it!!! You just need to write a migration adding twitter_id, twitter_screen_name, and twitter_display_name to your User model.
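A sketch of that migration (Rails 3 style):

class AddTwitterFieldsToUsers < ActiveRecord::Migration
  def change
    add_column :users, :twitter_id, :string
    add_column :users, :twitter_screen_name, :string
    add_column :users, :twitter_display_name, :string
  end
end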

Conclusion

Here is a summary of what we just did, for your quick reference:

Now users are able to sign up and sign in by clicking the Twitter icon in your application rather than providing you with their email and password. The first time a user clicks this icon, they’ll be redirected off to Twitter, which will ask them to authorize your application to access their data. If they choose Allow, they will be redirected back to your application. With the parameters sent back from the final request, you’ll attempt to find a User record matching their Twitter ID or, if there isn’t one, you will create one instead. Then you’ll sign them in.

After that, when the user attempts to sign in using the Twitter icon, they’ll still be redirected back to Twitter, but this time Twitter won’t ask them for authorization again. Instead, Twitter will instantly redirect them back to your application; the whole process will seem pretty smooth, albeit with the delay that can normally be expected from doing two HTTP requests. Similarly you can use available OmniAuth strategies to add other providers like Facebook, Github to your application.

What are you waiting for? Start the Rails server and test our alternative authentication system!
