

Promise and Generator support

SuperAgent's request is a "thenable" object that's compatible with JavaScript promises and the async/await syntax.

If you're using promises, do not call .end() or .pipe(). Any use of .then() or await disables all other ways of using the request.

Libraries like co, or a web framework like koa, can yield on any SuperAgent method.

Note that SuperAgent expects the global Promise object to be present. You'll need a polyfill to use promises in Internet Explorer or Node.js 0.10.

How is information transferred from the encoder to the decoder?

In NMT and image captioning the encoder creates a fixed-length encoding (a vector of real numbers) that encapsulates information about the input. This representation has several names:

  • embedding
  • latent vector
  • meaning vector
  • thought vector

Here is the key: the embedding becomes the initial state of the decoder RNN. Read that again. When the decoding process starts it has, in theory, all of the information that it needs to generate the target sequence.
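
The handoff can be sketched with a toy vanilla RNN in NumPy (all sizes, weights, and inputs below are random placeholders, not a trained model):

```python
import numpy as np

rng = np.random.default_rng(0)
hidden = 8  # size of the thought vector

# Toy RNN cell: h' = tanh(W_x x + W_h h)
W_x_enc, W_h_enc = rng.normal(size=(hidden, 4)), rng.normal(size=(hidden, hidden))
W_x_dec, W_h_dec = rng.normal(size=(hidden, 4)), rng.normal(size=(hidden, hidden))

def rnn_step(x, h, W_x, W_h):
    return np.tanh(W_x @ x + W_h @ h)

# The encoder consumes the whole source sequence...
h = np.zeros(hidden)
for x in rng.normal(size=(5, 4)):          # 5 source "tokens"
    h = rnn_step(x, h, W_x_enc, W_h_enc)

thought_vector = h                          # fixed-length encoding of the input

# ...and the decoder starts from that state instead of from zeros.
h_dec = thought_vector
for x in rng.normal(size=(3, 4)):           # 3 target "tokens"
    h_dec = rnn_step(x, h_dec, W_x_dec, W_h_dec)

print(thought_vector.shape)  # (8,)
```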

Once you understand this, the sky is the limit: conditional language models power many applications beyond translation and image captioning.

Training the model

Here are a few details about the training process. Since this is for demonstration purposes, we didn’t put a great deal of effort into tuning the hyperparameters.

  • Loss function: categorical_crossentropy
  • Learning rate: 0.001
  • Optimizer: RMSProp
  • Batch size: 128
  • Backend: TensorFlow v1.7.0
  • Platform: We used FloydHub (please don’t tell me you are still managing your own deep learning AWS instances!). Tesla K80 GPU with 4 cores.
  • Runtime: We trained the model for about 10 hours (5 epochs), during which time we were standing around like a couple of Rory Calhouns. The performance was still improving but we deemed it to be sufficient for our purposes.
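
In Keras terms, those settings translate into roughly the following (the architecture here is a generic character-level stand-in, not the post's actual model; vocabulary size and sequence length are made up):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.optimizers import RMSprop

# Stand-in model: 40 timesteps, a 60-character vocabulary.
model = Sequential([
    LSTM(128, input_shape=(40, 60)),
    Dense(60, activation="softmax"),
])
model.compile(
    loss="categorical_crossentropy",
    optimizer=RMSprop(learning_rate=0.001),  # the kwarg was `lr` in TF 1.x
)
# model.fit(X_train, y_train, batch_size=128, epochs=5)
```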

HTML Response

To return a response with HTML directly from FastAPI, use HTMLResponse.

  • Import HTMLResponse.
  • Pass HTMLResponse as the parameter response_class of your path operation decorator.

Info

The parameter response_class will also be used to define the “media type” of the response.

In this case, the HTTP header Content-Type will be set to text/html.

And it will be documented as such in OpenAPI.

Return a Response

As seen in Return a Response directly, you can also override the response directly in your path operation, by returning it.

The same example from above, returning an HTMLResponse, could look like:

Warning

A Response returned directly by your path operation function won’t be documented in OpenAPI (for example, the Content-Type won’t be documented) and won’t be visible in the automatic interactive docs.

Info

Of course, the actual Content-Type header, status code, etc, will come from the Response object your returned.

Document in OpenAPI and override Response

If you want to override the response from inside of the function but at the same time document the “media type” in OpenAPI, you can use the response_class parameter AND return a Response object.

The response_class will then be used only to document the OpenAPI path operation, but your Response will be used as is.

Return an HTMLResponse directly

For example, it could be something like:

In this example, the function generate_html_response() already generates and returns a Response instead of returning the HTML in a str.

By returning the result of calling generate_html_response(), you are already returning a Response that will override the default FastAPI behavior.

But as you passed the HTMLResponse in the response_class too, FastAPI will know how to document it in OpenAPI and the interactive docs as HTML with text/html:

Retrying requests

When you use the .retry() method, SuperAgent will automatically retry requests that fail in a transient way, for example due to a flaky Internet connection.

This method has two optional arguments: number of retries (default 1) and a callback. It calls callback(err, res) before each retry. The callback may return true/false to control whether the request should be retried (but the maximum number of retries is always applied).

Use .retry() only with requests that are idempotent (i.e. multiple requests reaching the server won't cause undesirable side effects like duplicate purchases).

All request methods are retried by default (so if you do not want POST requests to be retried, you will need to pass a custom retry callback).

By default the following status codes are retried:

  • 408
  • 413
  • 429
  • 500
  • 502
  • 503
  • 504
  • 521
  • 522
  • 524

By default the following error codes are retried:

  • 'ETIMEDOUT'
  • 'ECONNRESET'
  • 'EADDRINUSE'
  • 'ECONNREFUSED'
  • 'EPIPE'
  • 'ENOTFOUND'
  • 'ENETUNREACH'
  • 'EAI_AGAIN'

But how does Twilio see our app?

Our app needs a publicly accessible URL. To avoid having to deploy every time we make a change, we’ll use a nifty tool called ngrok to open a tunnel to our local machine.

Ngrok generates a custom forwarding URL that we will use to tell Twilio where to find our application. Download ngrok and run it in your terminal on port 5000.
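
Assuming the ngrok binary is on your PATH, that looks like:

```shell
# Expose local port 5000 through a public ngrok forwarding URL
ngrok http 5000
```

Ngrok prints the public https URL in its terminal UI; paste that URL into your Twilio webhook configuration.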

10 Extension Ideas to Improve the Model

Below are 10 ideas you could experiment with that may further improve the model:

  • Predict fewer than 1,000 characters as output for a given seed.
  • Remove all punctuation from the source text, and therefore from the model’s vocabulary.
  • Try a one-hot encoding for the input sequences.
  • Train the model on padded sentences rather than random sequences of characters.
  • Increase the number of training epochs to 100 or many hundreds.
  • Add dropout to the visible input layer and consider tuning the dropout percentage.
  • Tune the batch size, try a batch size of 1 as a (very slow) baseline and larger sizes from there.
  • Add more memory units to the layers and/or more layers.
  • Experiment with scale factors (temperature) when interpreting the prediction probabilities.
  • Change the LSTM layers to be “stateful” to maintain state across batches.
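
The temperature idea can be sketched with a generic softmax-with-temperature sampler (not code from the original post):

```python
import numpy as np

def sample_with_temperature(probs, temperature=1.0, seed=0):
    """Sample a class index after re-scaling predicted probabilities.

    temperature < 1 sharpens the distribution (more conservative text);
    temperature > 1 flattens it (more surprising text).
    """
    probs = np.asarray(probs, dtype=np.float64)
    logits = np.log(probs + 1e-9) / temperature
    logits -= logits.max()                 # for numerical stability
    scaled = np.exp(logits)
    scaled /= scaled.sum()
    rng = np.random.default_rng(seed)
    return int(rng.choice(len(scaled), p=scaled))
```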

Did you try any of these extensions? Share your results in the comments.

Working with Unicode code points

Code points mostly correspond to characters (with some exceptions). In most cases, modern programming languages will iterate over strings by their code points and give their length in code points (e.g. Python 3).

In JavaScript, the length property reports UTF-16 code units rather than code points, for historical and backwards-compatibility reasons. The substring and substr methods and indexing likewise operate on code units. However, iterating over a string steps one code point at a time, so you can get around these issues by converting the string to an array:
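
A quick sketch of this workaround (the sample string is arbitrary; the emoji is a surrogate pair in UTF-16):

```javascript
const s = "abc👋";

console.log(s.length);        // 5 — counts UTF-16 code units, not code points
console.log(s[3]);            // a lone surrogate half, not the emoji

// Iteration (and therefore the spread operator) steps one code point at a time:
const points = [...s];
console.log(points.length);   // 4
console.log(points[3]);       // "👋"
```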
