== New features
* cmd/emaildecode: CLI to decode an email body to plain text
The emaildecode command accepts a file as input. If the email header
contains Content-Transfer-Encoding with value "quoted-printable" or
"base64", it decodes the message body and prints it to stdout as plain
text.
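The decoding described above can be sketched with the Go standard library alone; this is an illustration of the behavior, not the emaildecode source, and the helper name decodeBody is hypothetical:

```go
package main

import (
	"encoding/base64"
	"fmt"
	"io"
	"mime/quotedprintable"
	"strings"
)

// decodeBody decodes a message body according to the value of the
// Content-Transfer-Encoding header.
func decodeBody(encoding, body string) (string, error) {
	switch strings.ToLower(encoding) {
	case "quoted-printable":
		out, err := io.ReadAll(quotedprintable.NewReader(strings.NewReader(body)))
		return string(out), err
	case "base64":
		out, err := base64.StdEncoding.DecodeString(body)
		return string(out), err
	}
	// Other encodings are passed through unchanged.
	return body, nil
}

func main() {
	qp, _ := decodeBody("quoted-printable", "Caf=C3=A9")
	b64, _ := decodeBody("base64", "Q2Fmw6k=")
	fmt.Println(qp, b64) // Café Café
}
```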
== Bug fixes
* lib/memfs: another fix for refresh
The previous commit used the wrong condition when handling directory
"." as Root.
== Enhancements
* lib/email: allow messages that end lines with LF only
Although a message from the network must end with CRLF, a message from
(another) client may have been sanitized to end with LF only.
* lib/email: decode the message body based on Content-Transfer-Encoding
After the header and body have been parsed, if the header contains
Content-Transfer-Encoding, we decode the body into its original format.
The currently supported encodings are "quoted-printable" and "base64".
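The relaxed line-ending handling can be sketched as follows; this is only an illustration of accepting both CRLF and bare LF, not the lib/email parser itself:

```go
package main

import (
	"fmt"
	"strings"
)

// splitLines splits a raw message into lines, accepting both CRLF (as
// required for messages on the wire) and bare LF (as produced by some
// sanitizing clients).
func splitLines(raw string) []string {
	// Normalize CRLF to LF first, then split on LF.
	raw = strings.ReplaceAll(raw, "\r\n", "\n")
	return strings.Split(strings.TrimSuffix(raw, "\n"), "\n")
}

func main() {
	crlf := splitLines("Subject: hi\r\n\r\nbody\r\n")
	lf := splitLines("Subject: hi\n\nbody\n")
	fmt.Println(len(crlf), len(lf)) // 3 3
}
```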
== Others
* lib/email: export the Header fields
Exporting the fields allows the caller to filter or manage them
manually.
* _doc: add partial note and summary for RFC 2183
RFC 2183 defines the Content-Disposition header field in the internet
message.
* lib/ini: mention that marshaling []byte is not supported
Because "byte" is treated as "uint8" during reflection, we cannot tell
whether the value is a slice of bytes or a slice of numbers with type
uint8.
=== Bug fix
* lib/memfs: sanitize the Root directory to fix refresh
In [MemFS.refresh], if the requested URL is "/file1" and [Options.Root]
is ".", the path during refresh becomes "file1", and if passed to
[filepath.Dir] it will return ".". This caused the refresh loop to
never end, because no PathNodes equal ".".
=== Enhancements
* lib/http: add request type HTML
The RequestTypeHTML defines the content type "text/html".
* lib/path: add method Path to Route
Unlike the String method, which may return the key's name in the
returned path, the Path method returns the path with all keys
substituted with their values, even if a value is empty.
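The substitution behavior can be illustrated generically; the substitute helper below is hypothetical and not the lib/path API, it only shows ":key" segments being replaced by bound values (or by an empty string when unbound):

```go
package main

import (
	"fmt"
	"strings"
)

// substitute replaces every ":key" segment in a route with its bound
// value, or an empty string when the key has no value.
func substitute(route string, vals map[string]string) string {
	segs := strings.Split(route, "/")
	for i, seg := range segs {
		if strings.HasPrefix(seg, ":") {
			segs[i] = vals[seg[1:]]
		}
	}
	return strings.Join(segs, "/")
}

func main() {
	// ":page" has no value, so it is substituted with an empty string.
	fmt.Println(substitute("/book/:id/:page", map[string]string{"id": "42"}))
	// Output: /book/42/
}
```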
=== Breaking changes
* lib/http: refactor "multipart/form-data" parameters in ClientRequest
Previously, a ClientRequest with type RequestTypeMultipartForm passed
the type "map[string][]byte" in Params.
This type held the file upload, where the key is the file name and
[]byte is the content of the file.
Unfortunately, this model is not correct, because a
"multipart/form-data" part can contain a different field name and file
name, for example,

  --boundary
  Content-Disposition: form-data; name="field0"; filename="file0"
  Content-Type: application/octet-stream

  <Content of file0>

This change fixes it by changing the parameter type for
RequestTypeMultipartForm to [*multipart.Form], which affects several
functions including [Client.PutFormData] and [GenerateFormData].
=== Bug fixes
* lib/dns: fix packing and unpacking OPT record
The RDATA in OPT records can contain zero or more options.
Previously, we only handled packing and unpacking one option; now we
handle multiple options.
* telegram/bot: fix Webhook URL registration
Using [path.Join] causes "https://domain" to become "https:/domain",
which is not a valid URL. This bug was caused by refactoring in
b89afa24f.
=== Enhancements
* lib/memfs: set embed file mode to print as octal
Using octal for the mode makes the embedded code more readable; for
example, a mode with permission "0o644" is much more readable than
"420".
* telegram/bot: register GET endpoint to test webhook
A call to "GET <Webhook.URL.Path>/<Token>" will return HTTP status 200
with the JSON body '{"code":200,"message":"OK"}'.
This endpoint is for checking whether the bot server is really running.
* lib/http: allow all HTTP methods to generate an HTTP request with body
Although RFC 7231 says there is no specially defined meaning for a
payload in GET, some HTTP API implementations use GET with content
type "application/x-www-form-urlencoded".
* lib/http: add new function [CreateMultipartFileHeader]
The CreateMultipartFileHeader helps create a [multipart.FileHeader]
from raw bytes, which can be assigned to a [*multipart.Form].
This is the first release after we moved the repository to SourceHut
under a different name: "pakakeh.go".
There are several reasons for the move and the new name.
First, the name of the package.
We accidentally named the package "share", a common English word that
does not reflect the content of the repository.
By moving to another repository, we can rename it to a better and
unique name, in this case "pakakeh.go".
Pakakeh is a Minang word for tools, and the ".go" suffix indicates that
the repository is related to the Go programming language.
Second, supporting open source.
The new repository is hosted on sourcehut.org, whose founder is known
to support open source, and all their services are licensed under the
AGPL, unlike GitHub, which is closed source.
Third, regarding GitHub Copilot.
https://docs.github.com/en/site-policy/github-terms/github-terms-of-service#4-license-grant-to-us[The
GitHub Terms of Service]
grant GitHub a license to parse any public content hosted there.
On one side, GitHub helps open source flourish, but on the other side
it has
https://githubcopilotinvestigation.com[issues]
regarding scraping copyleft-licensed code.
=== Breaking changes
Since we are moving to a new repository, we fixed all linter warnings
and inconsistencies that we could not change in the previous module.
Breaking changes related to naming,
* api/slack: [Message.IconUrl] become [Message.IconURL]
* lib/dns: DefaultSoaMinumumTtl become DefaultSoaMinimumTTL
* lib/email: [Message.SetBodyHtml] become [Message.SetBodyHTML]
* lib/http: [Client.GenerateHttpRequest] become
[Client.GenerateHTTPRequest]
* lib/http: [ClientOptions.ServerUrl] become [ClientOptions.ServerURL]
* lib/http: [EndpointRequest.HttpWriter] become
[EndpointRequest.HTTPWriter]
* lib/http: [EndpointRequest.HttpRequest] become
[EndpointRequest.HTTPRequest]
* lib/http: [ServerOptions.EnableIndexHtml] become
[ServerOptions.EnableIndexHTML]
* lib/http: [SSEConn.HttpRequest] become [SSEConn.HTTPRequest]
* lib/smtp: [ClientOptions.ServerUrl] become [ClientOptions.ServerURL]
* lib/ssh/sftp: [FileAttrs.SetUid] become [FileAttrs.SetUID]
* lib/ssh/sftp: [FileAttrs.Uid] become [FileAttrs.UID]
Changes on packages,
* lib/sql: remove deprecated Row type
The Row type has been replaced with the Meta type, which has more
flexibility and features for generating type-safe SQL DML.
* lib/memfs: remove deprecated Merge function
The Merge function has been replaced with [memfs.MemFS.Merge] for a
better API.
* lib: move package "net/html" to "lib/html"
Putting "html" under the "net" package makes no sense.
Another reason is to keep the packages flat under the "lib/" directory.
* lib: move package "ssh/config" to "lib/sshconfig"
Previously, "ssh/config" was used by the parent package "ssh" and by
"ssh/sftp", which breaks the rule of package layering (the top package
should be imported by sub packages, not the other way around).
* lib/http: refactor RegisterEndpoint and RegisterSSE to non-pointer
Once the endpoint is registered, the caller should not be able to
change any values on the endpoint again.
* lib/http: refactor NewServer and NewClient
NewServer and NewClient now accept non-pointer options, so the caller
cannot modify the options once the server or client has been created.
* lib/http: refactor Client methods to use struct ClientRequest
Instead of three parameters, the Client methods now accept a single
struct, [ClientRequest].
* lib/http: refactor Client methods to return struct ClientResponse
Instead of returning three values, [http.Response], []byte, and error,
we combine the [http.Response] and []byte into a single struct:
ClientResponse.
* lib/http: refactor type of RequestMethod from int to string
The reason is to make the stored or encoded RequestMethod value
readable from the user's point of view, instead of a number: 0, 1, 2,
etc.
* lib/http: refactor type of RequestType from int to string
The reason is to make the stored or encoded RequestType value readable
from a human point of view, instead of a number: 0, 1, 2, etc.
* lib/http: refactor type of ResponseType from int to string
The reason is to make the stored or encoded value readable from a
human point of view, instead of a number: 0, 1, 2, etc.
* lib/http: refactor FSHandler type to return [*memfs.Node]
Changing the FSHandler type to return [*memfs.Node] allows the handler
to redirect or return a custom node.
One use case is when serving a Single Page Application (SPA), where
routing is handled by JavaScript.
For example, when the user requests "/dashboard" but the dashboard
directory does not exist, one can write the following handler to
return "/index.html",

  node, _ = memfs.Get(`/index.html`)
  return node

* lib/dns: refactor [Message.Unpack] to [UnpackMessage]
The previous API for Message was a little bit weird.
It provided creating a Message manually, but exposed the method
[UnpackHeaderQuestion], while the packet field itself was unexported.
To make it clearer, we refactor [Message.Unpack] into the function
[UnpackMessage], which accepts a raw DNS packet.
=== New features
* test/httptest: new helper for testing HTTP server handlers
The Simulate function simulates an HTTP server handler by generating an
[http.Request] from the fields in [SimulateRequest], and then calling
the [http.HandlerFunc].
The HTTP response from the server, along with its raw body and the
original HTTP request, is then returned in [*SimulateResult].
* lib/dns: implement RFC 9460 for SVCB RR and HTTPS RR
The dns package now supports packing and unpacking DNS records with
type 64 (SVCB) and 65 (HTTPS).
* cmd/ansua: command line interface to help track time
Usage,

  ansua <duration> [ "<command>" ]

ansua runs a timer for the defined duration and optionally runs a
command when the timer finishes.
While the ansua timer is running, one can pause it by pressing
p+Enter, resume it by pressing r+Enter, or stop it using CTRL+C.
=== Bug fixes
* lib/memfs: trim trailing slash ("/") in the path of the Get method
The MemFS always stores directories without a trailing slash.
If the caller requests a directory node with a slash, it will always
return nil.
* lib/dns: use ParseUint to parse escaped octets in "\NNN" format
Previously, we used ParseInt to parse the escaped octet "\NNN", but
that method only allows decimal values from 0 to 127, while the
specification allows 0 to 255.
=== Enhancements
* lib/http: handle CORS options independently
Previously, if [CORSOptions.AllowOrigins] did not match, we returned
immediately without checking the request's
"Access-Control-Request-Method", "Access-Control-Request-Headers", and
other CORS options.
This change checks each of them; a missing allowed origin does not
mean empty allowed methods, headers, MaxAge, or credentials.
* lib/bytes: add parameter networkByteOrder to ParseHexDump
If networkByteOrder is true, ParseHexDump reads each hex string in
network byte order; otherwise, in the order defined in the text.
While at it, fix reading and parsing a single-byte hex.
* cmd/httpdfs: set default include options to empty
By default, httpdfs now serves all files under the base directory.