Plugins

The core of gfal2 implements only a small part of the total functionality, mainly the generic logic to transfer data between different protocols (via open, read, write, close).

The particularities of each protocol are implemented by "plugins": dynamic libraries that implement a set of calls and are loaded at run time by the gfal2 core library.

Each plugin is named as follows: gfal2-plugin-<protocol name>
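
For instance, on an EL-based system the plugins are typically shipped as separate packages, so (assuming they are available in your repositories) GridFTP support can be installed with:

yum install gfal2-plugin-gridftp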

gfal2-plugin-file

Implements local file access. The scheme is file:///. gfal2-util will automatically use this scheme when a plain path is used (e.g. /tmp/file), but this is done at the gfal2-util level.
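
For instance, assuming gfal2-util is installed, the following two commands are equivalent:

gfal-ls /tmp
gfal-ls file:///tmp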

gfal2-plugin-gridftp

Adds GridFTP support via the Globus Toolkit. The scheme is gsiftp://. It supports third party copy and is, by far, the most used plugin on this list (closely followed by srm). It also supports plain FTP, using the scheme ftp://.

For details of how third party copies happen in GridFTP/FTP, have a look at the Wikipedia page for the File eXchange Protocol.

When using gsiftp, the authentication mechanism is X509 user certificate or proxy.
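
As an illustration (the host names, paths and VO are placeholders), a third party copy between two GridFTP endpoints authenticated with a VOMS proxy could look like this:

voms-proxy-init --voms myvo
gfal-copy gsiftp://source.example.com/data/file gsiftp://destination.example.com/data/file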

When using ftp, the authentication mechanism is user and password. By default, it uses anonymous:anonymous, but this can be overridden via the credential API (see USER and PASSWORD), or via configuration, with the variables USER and PASSWORD inside the FTP group.
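
For example, the FTP credentials could be set in the gfal2 configuration like this (the values are placeholders):

[FTP]
USER=myuser
PASSWORD=mypassword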

Third party copies are possible with plain FTP, but the default configuration of most servers disables this for security reasons.

In the FTS3 repositories, you can see how to enable this for the Windows FTP server.

You can also check our Docker container pre-configured using vsftpd.

Note for anyone trying to set up a server with FXP enabled: FXP is disabled by default for a reason. It can allow an attacker to use the FTP server to scan hosts normally not accessible from outside the firewall. We advise using GridFTP whenever possible, and using FTP only within the firewall, or only after properly restricting connections to a trusted subset of hosts.

gfal2-plugin-http

This plugin builds on top of DaviX, and provides access to HTTP (http:// or https://), WebDAV (dav:// or davs://) and S3 (s3:// or s3s://). It also implements the fall-back logic to try third party copies first via pull, then push, and, finally, streaming through the client.
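
As an illustration (host names and paths are placeholders), a copy between two WebDAV endpoints, letting the plugin fall back from pull to push to streaming as needed:

gfal-copy davs://source.example.com/data/file davs://destination.example.com/data/file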

Depending on the protocol, the authentication will be done via X509 certificates (if the server asks for a client certificate), or via S3 keys.

The S3 keys can be configured in different ways:

Default
[S3]
ACCESS_KEY=
SECRET_KEY=
TOKEN=
REGION=
ALTERNATE=

For a specific host
[S3:s3.eu-west-1.amazonaws.com]
ACCESS_KEY=
SECRET_KEY=
TOKEN=
REGION=
ALTERNATE=

If ALTERNATE is True, then DaviX will generate the token for S3 path-based urls (bucket on the path).
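
For illustration only (bucket and host are placeholders), a path-based URL carries the bucket in the path, as opposed to a virtual-hosted URL that carries it in the host name:

s3s://s3.example.com/mybucket/file.txt
s3s://mybucket.s3.example.com/file.txt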

QoS

The following QoS methods have been implemented as part of the CDMI-QoS support for the http plugin:

gfal_http_check_classes

This method accepts two arguments, url and type. The url is the URL of a CDMI-enabled storage endpoint, and the type is either "dataobject" or "container". It returns a list of the available QoS classes for the specific resource type, or NULL if the request fails.

gfal_http_check_file_qos

This method accepts one argument, the url of a file/directory. The url must be the http exposed URL of a file/dir on a CDMI-enabled storage. It returns the QoS class of the given resource, or NULL if there is none or the request fails.

gfal_http_check_qos_available_transitions

This method accepts one argument, the url for a specific QoS class on the target CDMI-enabled storage. It returns a list of the transitions allowed from that QoS class, or NULL if the request fails.

gfal_http_change_object_qos

This method accepts two arguments, the url of a file/directory and a QoS class. The url must be the http exposed URL of a file/dir on a CDMI-enabled storage. It requests a transition of the QoS class of the file/directory to the new specified one. It returns 0 if the request succeeds, or -1 if it fails.

gfal_http_check_target_qos

This method accepts one argument, the url of a file/directory. The url must be the http exposed URL of a file/dir on a CDMI-enabled storage. It returns the target QoS class of the file. If a request to change the QoS of the file has been issued and has not yet completed, the returned value will be the target QoS class of that transition. If the transition has already completed or the request fails, NULL will be returned.

gfal2-plugin-mock

This plugin implements a "mock" protocol, one that doesn't trigger any remote connection.

However, via the URL the user can trigger certain behaviors, like errors, timeouts, or even segmentation faults.

This is useful for testing applications built on top of gfal2. For instance, we use it extensively for testing FTS3.

Supported arguments:

  • list: For directories, a comma separated list of items as name:mode(octal):size(decimal)
  • size: File size, in bytes
  • size_pre: File size, in bytes, for stats previous to a copy
  • size_post: File size, in bytes, for stats following a copy
  • checksum: Checksum value
  • time: Time that a copy will take. To be specified on the destination URL.
  • errno: Trigger an error with this errno number
  • transfer_errno: Trigger an error with this errno number during the transfer
  • staging_time: Staging total time
  • staging_errno: Fail the staging with this error number
  • release_errno: Fail the release with this error number
  • signal: Raise the signal specified as an integer

Also, if the string MOCK_LOAD_TIME_SIGNAL is found in any parameter of the current process (obtained by reading /proc/self/cmdline), the digits that follow it will be used to raise a signal at instantiation time.

Examples

Fail with an ENOENT

gfal-stat mock://host/path?errno=2

Stat a regular file with a size of 1000 bytes

gfal-stat mock://host/path?size=1000

Trigger a copy that will take 5 seconds

gfal-copy "mock://host/path?size=1000" "mock://host/path2?errno=2&size_pre=0&size_post=1000&time=5"

Trigger a segfault

gfal-ls "mock://host/path?signal=11"

Or

gfal-ls "mock://host/path/MOCK_LOAD_TIME_SIGNAL11"

gfal2-plugin-sftp

This plugin allows operating over an SSH connection. It is experimental, and has never been used in production. It could be an interesting alternative to plain FTP access, since most servers will have a working SSH service anyway.

It supports both user/password and RSA private key authentication.

[SFTP PLUGIN]
## Defaults to current user name
# USER=
## Defaults to empty
# PASSWORD=
## Defaults to $HOME/.ssh/id_rsa
# PRIVKEY=
## Private key passphrase. Defaults to empty
# PASSPHRASE=
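
A basic usage sketch (host and paths are placeholders):

gfal-ls sftp://host.example.com/home/user
gfal-copy sftp://host.example.com/home/user/file file:///tmp/file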

gfal2-plugin-srm

This plugin provides SRM support, and is, together with the GridFTP plugin, one of the most used in production.

SRM does not provide data transfer, only namespace operations. Similarly to the LFC plugin, when a data operation is initiated towards an SRM endpoint, the plugin will resolve a replica (via an SRM Get), and then chain to the plugin that supports the protocol of the resolved replica.
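
For example (the URLs are placeholders), copying from an SRM endpoint to a local file; behind the scenes the plugin typically resolves a gsiftp replica and delegates the actual transfer to the GridFTP plugin:

gfal-copy srm://storage.example.com/data/myvo/file file:///tmp/file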

SRM supports staging operations.
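
As a sketch (the URL is a placeholder), the online/nearline status of a file on an SRM endpoint can be checked through the user.status extended attribute exposed by gfal2:

gfal-xattr srm://storage.example.com/data/myvo/file user.status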

The SRMv2.2 spec is a good starting point to understand the protocol. However, there is a lot of trickery involved dealing with SRM endpoints, since different storages sometimes show different behavior.

Also, some parts of the spec are just ignored by storage implementations due to past issues. For instance, for the bring online operation, Castor ignores both desiredTotalRequestTime and desiredPinLifetime. dCache imposes a limit on the former, so even if requesting a 12h timeout, the request may still expire after 4h.

Additionally, for desiredTotalRequestTime when writing, dCache counts the duration from the Put to the PutDone. Others only consider this parameter for the actual Put request.

gfal2-plugin-xrootd

Last, but not least, we have the xrootd plugin. It builds on top of the POSIX-like API provided by XrdPosixXrootd, and on XrdCl for the asynchronous operations: transfers and staging.
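
A basic example (host and paths are placeholders); note the double slash after the host, as is usual in xrootd URLs:

gfal-ls root://xrootd.example.com//store/data
gfal-copy root://xrootd.example.com//store/data/file file:///tmp/file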

This is the only plugin apart from the srm one that supports staging but, as of today (November 2017), only CTA will support this.
