
Queue Your Way To Scalability


Introduction
The very first thing I did after I began programming in Go was start porting my Windows utility classes and service frameworks over to Linux. That is what I did when I moved from C++ to C#. Thankfully, I quickly learned about Iron.IO and the services they offered. Then it hit me: if I wanted true scalability, I needed to start building worker tasks that could be queued to run anywhere at any time. It was not about how many machines I needed, it was about how much compute time I needed.

The freedom that comes with architecting a solution around web services and worker tasks is refreshing. If I need 1,000 instances of a task to run, I can just queue it up. I don't need to worry about capacity, resources, or any other IT related issues. If my service becomes an instant hit overnight, the architecture is ready and the capacity is available.

My mobile weather application Outcast is a prime example. I currently have a single scheduled task that runs in Iron.IO every 10 minutes. This task updates the marine forecast areas for the United States, downloading and parsing 472 web pages from the NOAA website. We are about to add Canada, and eventually we want to move into Europe and Australia. At that point a single scheduled task is not a scalable or redundant architecture for this process.

Thanks to the Go Client from Iron.IO, I can build a task that wakes up on a schedule and queues up as many marine forecast area worker tasks as needed. I can use this architecture to process each marine forecast area independently, in its own worker task, providing incredible scalability and redundancy. The best part: I don't have to think about hardware or IT related capacity issues.
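To make the idea concrete, here is a minimal sketch of what that scheduled task could do. The forecast area list and the queueTask helper are hypothetical placeholders; the actual queuing call against the Iron.IO API is shown later in this post.

package main

import "log"

// queueTask is a hypothetical helper that queues one worker task on Iron.IO
// for a single forecast area. The real queuing call is shown later in the post.
func queueTask(codeName string, area string) error {
    // ... build the tasks document and POST it to Iron.IO ...
    return nil
}

func main() {
    // Placeholder list of marine forecast areas to process independently.
    forecastAreas := []string{"area-1", "area-2", "area-3"}

    // Queue one worker task per area so each one runs in its own worker.
    for _, area := range forecastAreas {
        if err := queueTask("task", area); err != nil {
            log.Println("unable to queue area", area, ":", err)
        }
    }
}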

Create a Worker Task
Back in September I wrote a post about building and uploading an Iron.IO worker task using Go:

https://www.ardanlabs.com/blog/2013/09/running-go-programs-in-ironworker.html

This task simulated 60 seconds of work and ran experiments to understand some of the capabilities of the worker task container. We are going to use this worker task to demonstrate how to use the Go Client to queue a task. If you want to follow along, go ahead and walk through that post and create the worker task.
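For reference, that worker does nothing more than simulate work. A minimal sketch of such a worker, assuming it simply sleeps for 60 seconds and logs, could look like this:

package main

import (
    "log"
    "time"
)

func main() {
    log.Println("worker task starting")

    // Simulate 60 seconds of work.
    time.Sleep(60 * time.Second)

    log.Println("worker task complete")
}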

I am going to assume you walked through the post and created the worker called “task” as depicted in the image below:

Download The Go Client
Download the Go Client from Iron.IO:

go get github.com/iron-io/iron_go

Now navigate to the examples folder:

Screen Shot

The examples leverage the API that can be found here:
http://dev.iron.io/worker/reference/api/

Not all of the API calls are represented in these examples, but from these examples the rest of the API can be easily implemented.

In this post we are going to focus on the task API calls. These are the APIs that you will most likely be able to leverage in your own programs and architectures.

Queue a Task
Open up the queue example from the examples/tasks folder. We will walk through the more important aspects of the code.

In order to queue a task with the Go client, we need to create this document, which will be posted with the request:

{
    "tasks": [
        {
            "code_name": "MyWorker",
            "timeout": 60,
            "payload": "{\"x\": \"abc\", \"y\": \"def\"}"
        }
    ]
}

In the case of our worker task, the payload document in Go should look like this:

var payload = `{"tasks":[
    {
        "code_name" : "task",
        "timeout" : 120,
        "payload" : ""
    }
]}`
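If you would rather not maintain the JSON by hand, the same document can also be built with structs and json.Marshal. This is just an alternative sketch; the struct and field names here are my own, chosen to mirror the document above.

package main

import (
    "encoding/json"
    "log"
)

// QueueRequest mirrors the document posted to the tasks endpoint.
type QueueRequest struct {
    Tasks []QueueTask `json:"tasks"`
}

// QueueTask describes a single task to queue.
type QueueTask struct {
    CodeName string `json:"code_name"`
    Timeout  int    `json:"timeout"`
    Payload  string `json:"payload"`
}

func main() {
    request := QueueRequest{
        Tasks: []QueueTask{
            {CodeName: "task", Timeout: 120, Payload: ""},
        },
    }

    // Marshal the request into the same JSON document shown above.
    data, err := json.Marshal(&request)
    if err != nil {
        log.Println(err)
        return
    }

    log.Println(string(data))
}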

Now let's look at the code that will request our task to be queued. The first thing we need to do is set our project id and token.

config := config.Config("iron_worker")
config.ProjectId = "your_project_id"
config.Token = "your_token"

As described in the post from September, this information can be found inside our project configuration:

Screen Shot
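One small note: rather than hardcoding the project id and token in the source, they can be read from the environment. This is a sketch under the assumption that the settings value can be assigned to just as above; the IRON_PROJECT_ID and IRON_TOKEN variable names are my own choice.

// Read the credentials from the environment instead of hardcoding them.
// The environment variable names here are arbitrary.
config := config.Config("iron_worker")
config.ProjectId = os.Getenv("IRON_PROJECT_ID")
config.Token = os.Getenv("IRON_TOKEN")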

Now we can use the Go Client to build the url and prepare the payload for the request:

url := api.ActionEndpoint(config, "tasks")
postData := bytes.NewBufferString(payload)

Using the url object, we can send the request to Iron.IO and capture the response:

resp, err := url.Request("POST", postData)
if err != nil {
    log.Println(err)
    return
}
defer resp.Body.Close()

body, err := ioutil.ReadAll(resp.Body)
if err != nil {
    log.Println(err)
    return
}
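Before reading too much into the body, it can also be worth checking the HTTP status code. This assumes the Request helper hands back a standard *http.Response, which the use of Body above suggests.

// Treat anything outside the 2xx range as a failure.
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
    log.Println("unexpected status:", resp.Status)
    return
}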

We want to check the response to make sure everything was successful. This is the response we will get back:

{
    "msg": "Queued up",
    "tasks": [
        {
            "id": "4eb1b471cddb136065000010"
        }
    ]
}

To unmarshal the result, we need these data structures:

type (
    TaskResponse struct {
        Message string `json:"msg"`
        Tasks   []Task `json:"tasks"`
    }

    Task struct {
        Id string `json:"id"`
    }
)

Now let's unmarshal the results:

taskResponse := &TaskResponse{}
err = json.Unmarshal(body, taskResponse)
if err != nil {
    log.Printf("%v\n", err)
    return
}
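With the response unmarshaled, the task ids are available on the struct. For example, we can log the id of every task that was queued so it can be looked up later in the HUD:

// Log the message and the id of every task that was queued.
log.Println(taskResponse.Message)
for _, task := range taskResponse.Tasks {
    log.Println("queued task id:", task.Id)
}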

If we want to use a map instead to reduce the amount of code, we can do this:

results := map[string]interface{}{}
err = json.Unmarshal(body, &results)
if err != nil {
    log.Printf("%v\n", err)
    return
}
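The trade-off with the map is that we have to use type assertions to dig the values back out. Here is a small sketch that pulls the id of the first queued task from the map:

// Walk the generic map down to the id of the first queued task.
if tasks, ok := results["tasks"].([]interface{}); ok && len(tasks) > 0 {
    if task, ok := tasks[0].(map[string]interface{}); ok {
        log.Println("queued task id:", task["id"])
    }
}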

When we run the example code and everything works, we should see the task response in the output.

If we navigate to the Iron.IO HUD, we should see that the task was queued and completed successfully:

Screen Shot

Conclusion
The Go client is doing a lot of the boilerplate work for us behind the scenes. We just need to make sure we have all of the configuration parameters that are required. Queuing a task is one of the more complicated API calls. Look at the other examples to see how to get information about the tasks we queue and even how to get the logs.

Queuing a task like this gives you the flexibility to schedule work at specific intervals or based on events. There are a lot of use cases where different types of web requests could leverage queuing a task. Leveraging this type of architecture provides a nice separation of concerns with scalability and redundancy built in. It keeps our web applications focused and optimized for handling user requests, and pushes the asynchronous and background tasks to a cloud environment designed and architected to handle things at scale.

As Outcast grows we will continue to leverage all of the services that Iron.IO and the cloud have to offer. There is a lot of data that needs to be downloaded, processed and then delivered to users through the mobile application. By building a scalable architecture today, we can handle whatever happens tomorrow.


