Softtorch operators
Elementwise operators
softtorch.abs(x: torch.Tensor, softness: float = 1.0, mode: Literal['hard', 'entropic', 'euclidean', 'pseudohuber', 'cubic', 'quintic'] = 'entropic') -> torch.Tensor
Performs a soft version of torch.abs.
Arguments:
- x: Input Array of any shape.
- softness: Softness of the function, should be larger than zero. Defaults to 1.
- mode: Projection mode. "hard" returns the exact absolute value; otherwise uses the "entropic", "pseudohuber", "euclidean", "cubic", or "quintic" relaxation. Defaults to "entropic".
Returns:
Result of applying soft elementwise absolute value to x.
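Example (a minimal sketch using only the parameters documented above; exact values depend on softness and mode):

```python
import torch
import softtorch

x = torch.linspace(-2.0, 2.0, steps=5, requires_grad=True)

# Soft absolute value: smooth around zero, so the gradient at x = 0 is
# well defined rather than a subgradient kink.
y = softtorch.abs(x, softness=0.5, mode="entropic")
y.sum().backward()

# mode="hard" recovers the exact absolute value; smaller softness values
# should bring the soft result closer to torch.abs(x).
y_hard = softtorch.abs(x, mode="hard")
```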
softtorch.clamp(x: torch.Tensor, a: torch.Tensor, b: torch.Tensor, softness: float = 1.0, mode: Literal['hard', 'entropic', 'euclidean', 'quartic', 'gated_entropic', 'gated_euclidean', 'gated_cubic', 'gated_quintic', 'gated_pseudohuber'] = 'entropic') -> torch.Tensor
Performs a soft version of torch.clamp.
Arguments:
- x: Input Array of any shape.
- a: Lower bound scalar.
- b: Upper bound scalar.
- softness: Softness of the function, should be larger than zero. Defaults to 1.
- mode: If "hard", applies torch.clamp. Otherwise uses the "entropic", "euclidean", "quartic", "gated_entropic", "gated_euclidean", "gated_cubic", "gated_quintic", or "gated_pseudohuber" relaxation. Defaults to "entropic".
Returns:
Result of applying soft elementwise clamping to x.
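Example (an illustrative sketch; the gradient behavior noted in the comments follows from the relaxation being smooth and may differ in magnitude between modes):

```python
import torch
import softtorch

x = torch.tensor([-3.0, 0.0, 3.0], requires_grad=True)
a, b = torch.tensor(-1.0), torch.tensor(1.0)

# torch.clamp would zero out gradients for entries outside [a, b]; the
# soft relaxation typically keeps a small nonzero gradient there, which
# can help gradient-based optimization move values back into range.
y = softtorch.clamp(x, a, b, softness=0.5, mode="entropic")
y.sum().backward()
print(x.grad)
```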
softtorch.heaviside(x: torch.Tensor, softness: float = 1.0, mode: Literal['hard', 'entropic', 'euclidean', 'pseudohuber', 'cubic', 'quintic'] = 'entropic') -> torch.Tensor
Performs a soft version of torch.heaviside(x,0.5).
Arguments:
- x: Input Array of any shape.
- softness: Softness of the function, should be larger than zero. Defaults to 1.
- mode: If "hard", returns the exact Heaviside step. Otherwise uses the "entropic", "euclidean", "pseudohuber", "cubic", or "quintic" relaxation. Defaults to "entropic".
Returns:
SoftBool of same shape as x (Array with values in [0, 1]), relaxing the
elementwise Heaviside step function.
softtorch.relu(x: torch.Tensor, softness: float = 1.0, mode: Literal['hard', 'entropic', 'euclidean', 'quartic', 'gated_entropic', 'gated_euclidean', 'gated_cubic', 'gated_quintic', 'gated_pseudohuber'] = 'entropic') -> torch.Tensor
Performs a soft version of torch.relu.
Arguments:
- x: Input Array of any shape.
- softness: Softness of the function, should be larger than zero. Defaults to 1.
- mode: If "hard", applies torch.relu. Otherwise uses the "entropic", "euclidean", "quartic", "gated_entropic", "gated_euclidean", "gated_cubic", "gated_quintic", or "gated_pseudohuber" relaxation. Defaults to "entropic".
Returns:
Result of applying soft elementwise ReLU to x.
softtorch.round(x: torch.Tensor, softness: float = 1.0, mode: Literal['hard', 'entropic', 'euclidean', 'pseudohuber', 'cubic', 'quintic'] = 'entropic', neighbor_radius: int = 5) -> torch.Tensor
Performs a soft version of torch.round.
Arguments:
- x: Input Array of any shape.
- softness: Softness of the function, should be larger than zero. Defaults to 1.
- mode: If "hard", applies torch.round. Otherwise uses a sigmoid-based relaxation following the algorithm described in https://arxiv.org/pdf/2504.19026v1, which inherits the sigmoid modes "entropic", "euclidean", "pseudohuber", "cubic", and "quintic". Defaults to "entropic".
- neighbor_radius: Number of neighbors on each side of the floor value to consider for the soft rounding. Defaults to 5.
Returns:
Result of applying soft elementwise rounding to x.
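Example (a minimal sketch; the specific softness and neighbor_radius values are arbitrary):

```python
import torch
import softtorch

x = torch.tensor([0.2, 0.5, 1.7, 2.9], requires_grad=True)

# A differentiable surrogate for rounding/quantization: torch.round has a
# zero gradient almost everywhere, while the soft version lets gradients
# flow back to x.
y = softtorch.round(x, softness=0.1, mode="entropic", neighbor_radius=3)
y.sum().backward()

y_hard = softtorch.round(x, mode="hard")  # exact rounding
```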
softtorch.sign(x: torch.Tensor, softness: float = 1.0, mode: Literal['hard', 'entropic', 'euclidean', 'pseudohuber', 'cubic', 'quintic'] = 'entropic') -> torch.Tensor
Performs a soft version of torch.sign.
Arguments:
- x: Input Array of any shape.
- softness: Softness of the function, should be larger than zero. Defaults to 1.
- mode: If "hard", applies torch.sign. Otherwise smooths via the "entropic", "euclidean", "pseudohuber", "cubic", or "quintic" relaxation. Defaults to "entropic".
Returns:
Result of applying soft elementwise sign to x.
Tensor-valued operators
softtorch.argmax(x: torch.Tensor, dim: int | None = None, keepdim: bool = False, softness: float = 1.0, mode: Literal['hard', 'entropic', 'euclidean'] = 'entropic') -> torch.Tensor
Performs a soft version of torch.argmax
of x along the specified dim.
Arguments:
- x: Input Array of shape (..., n, ...).
- dim: The dimension along which to compute the argmax. If None, the input Array is flattened before computing the argmax. Defaults to None.
- keepdim: If True, keeps the reduced dimension as a singleton {1}.
- softness: Softness of the function, should be larger than zero. Defaults to 1.
- mode: Controls the type of softening:
  - "hard": Returns the result of torch.argmax with a one-hot encoding of the indices.
  - "entropic": Returns a softmax-based relaxation of the argmax.
  - "euclidean": Returns an L2-projection-based relaxation of the argmax.
Returns:
A SoftIndex of shape (..., {1}, ..., [n]) (positive Array which sums to 1 over the last dimension). Represents the probability of an index corresponding to the argmax along the specified dim.
Usage
This function can be used as a differentiable relaxation to
torch.argmax,
enabling backpropagation through index selection steps in neural networks or
optimization routines. However, note that the output is not a discrete index
but a SoftIndex, which is a distribution over indices.
Therefore, functions which operate on indices have to be adjusted accordingly
to accept a SoftIndex, see e.g. softtorch.max for an example of using
softtorch.take_along_dim to retrieve the soft maximum value via the
SoftIndex.
Difference to torch.nn.functional.softmax
Note that softtorch.argmax in entropic mode is not fully equivalent to
torch.nn.functional.softmax
because it moves the probability dimension into the last dim
(this is a convention in the SoftIndex data type).
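A minimal sketch of the usage pattern described above, pairing softtorch.argmax with softtorch.take_along_dim (essentially what softtorch.max does); shapes follow the SoftIndex convention documented here:

```python
import torch
import softtorch

scores = torch.randn(4, 10, requires_grad=True)

# Soft argmax along dim=1: a SoftIndex of shape (4, 1, 10), i.e. a
# distribution over the 10 candidate indices for each row.
soft_idx = softtorch.argmax(scores, dim=1, keepdim=True, softness=0.1)

# Soft index selection: a weighted combination of the row entries,
# approximating the per-row maximum. Shape (4, 1).
soft_max = softtorch.take_along_dim(scores, soft_idx, dim=1)

soft_max.sum().backward()  # gradients flow through the index selection
```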
softtorch.max(x: torch.Tensor, dim: int | None = None, keepdim: bool = False, softness: float = 1.0, mode: Literal['hard', 'entropic', 'euclidean'] = 'entropic') -> torch.Tensor | torch.return_types.max[torch.Tensor, torch.Tensor]
Performs a soft version of torch.max
of x along the specified dim.
Implemented as softtorch.argmax followed by softtorch.take_along_dim; see the
respective documentation for details.
Returns:
- If dim is None (default): Scalar tensor representing the soft maximum of the flattened x.
- If dim is specified: Namedtuple containing two fields:
  - values: Tensor of shape (..., {1}, ...) representing the soft maximum of x along the specified dim.
  - indices: SoftIndex of shape (..., {1}, ..., [n]) (positive Tensor which sums to 1 over the last dimension). Represents the soft indices of the maximum values.
softtorch.argmin(x: torch.Tensor, dim: int | None = None, keepdim: bool = False, softness: float = 1.0, mode: Literal['hard', 'entropic', 'euclidean'] = 'entropic') -> torch.Tensor
Performs a soft version of torch.argmin
of x along the specified dim.
Implemented as softtorch.argmax on -x; see the respective documentation for
details.
softtorch.min(x: torch.Tensor, dim: int | None = None, softness: float = 1.0, mode: Literal['hard', 'entropic', 'euclidean'] = 'entropic', keepdim: bool = False) -> torch.Tensor | torch.return_types.min[torch.Tensor, torch.Tensor]
Performs a soft version of torch.min
of x along the specified dim.
Implemented via softtorch.max on -x; see the respective documentation for
details.
softtorch.median(x: torch.Tensor, dim: int | None = None, keepdim: bool = False, softness: float = 1.0, mode: Literal['hard', 'entropic', 'euclidean'] = 'entropic', fast: bool = True, max_iter: int = 1000) -> torch.Tensor | torch.return_types.median[torch.Tensor, torch.Tensor]
Performs a soft version of torch.median
of x along the specified dim.
Importantly, we change the behavior of the median operation on even-length inputs to always return the average of the two middle elements (instead of returning the lower one). This behavior is closer to torch.quantile() with q=0.5.
Arguments:
- x: Input Array of shape (..., n, ...).
- dim: The dim along which to compute the median. If None, the input Array is flattened before computing the median. Defaults to None.
- keepdim: If True, keeps the reduced dimension as a singleton {1}.
- softness: Softness of the function, should be larger than zero. Defaults to 1.
- mode and fast: These two arguments control the behavior of the median:
  - mode="hard": Returns the result of torch.median with a one-hot encoding of the indices. On ties, it returns a uniform distribution over all median indices.
  - fast=False and mode="entropic": Uses entropy-regularized optimal transport (implemented via Sinkhorn iterations). We adapt the approach in Differentiable Ranks and Sorting using Optimal Transport and Differentiable Top-k with Optimal Transport to the median operation by carefully adjusting the cost matrix and marginals. Intuition: There are three "anchors"; the median is transported onto one anchor, and all the larger and smaller elements are transported to the other two anchors, respectively. Can be slow for large max_iter.
  - fast=False and mode="euclidean": Similar to the entropic case, but using an L2 regularizer (implemented via projection onto the Birkhoff polytope).
  - fast=True and mode="entropic": This is a well-known soft median operation based on the interpretation of the median as the minimizer of absolute deviations. The softening is achieved by replacing the argmax operator with a softmax. Note that this also has close ties to the "SoftSort" operator from SoftSort: A Continuous Relaxation for the argsort Operator. Note: Fast mode introduces gradient discontinuities when elements in x are not unique, but is much faster.
  - fast=True and mode="euclidean": Similar to the entropic fast case, but using a Euclidean unit-simplex projection instead of softmax.
- max_iter: Maximum number of iterations for the Sinkhorn algorithm if mode is "entropic", or for the projection onto the Birkhoff polytope if mode is "euclidean". Unused if fast=True.
Returns:
- If dim is None (default): Scalar tensor representing the soft median of the flattened x.
- If dim is specified: Namedtuple containing two fields:
  - values: Array of shape (..., {1}, ...) representing the soft median of x along the specified dim.
  - indices: SoftIndex of shape (..., {1}, ..., [n]) (positive Array which sums to 1 over the last dimension). Represents the soft indices of the median values.
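Example (a minimal sketch illustrating the even-length convention; the hard-mode value follows from the description above):

```python
import torch
import softtorch

x = torch.tensor([3.0, 1.0, 4.0, 2.0], requires_grad=True)

# Even-length input: per the convention above, the hard mode returns the
# average of the two middle elements (2.5 here), unlike torch.median.
m_hard = softtorch.median(x, mode="hard")

# Differentiable soft median using the fast entropic formulation.
m_soft = softtorch.median(x, softness=0.1, mode="entropic", fast=True)
m_soft.backward()
```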
softtorch.argsort(x: torch.Tensor, dim: int | None = -1, descending: bool = False, softness: float = 1.0, mode: Literal['hard', 'entropic', 'euclidean'] = 'entropic', fast: bool = True, max_iter: int = 1000) -> torch.Tensor
Performs a soft version of torch.argsort
of x along the specified dim.
Arguments:
- x: Input Array of shape (..., n, ...).
- dim: The dim along which to compute the argsort operation. If None, uses the last dimension. Defaults to -1.
- descending: If True, sorts in descending order. Defaults to False (ascending).
- softness: Softness of the function, should be larger than zero. Defaults to 1.
- mode and fast: These two arguments control the type of softening:
  - mode="hard": Returns the result of torch.argsort with a one-hot encoding of the indices.
  - fast=False and mode="entropic": Uses entropy-regularized optimal transport (implemented via Sinkhorn iterations) as in Differentiable Ranks and Sorting using Optimal Transport. Intuition: The sorted elements are selected by specifying n "anchors" and then transporting the ith-largest value to the ith-largest anchor. Can be slow for large max_iter.
  - fast=False and mode="euclidean": Similar to the entropic case, but using an L2 regularizer (implemented via LBFGS projection onto the Birkhoff polytope) as in Fast Differentiable Sorting and Ranking.
  - fast=True and mode="entropic": Uses the "SoftSort" operator proposed in SoftSort: A Continuous Relaxation for the argsort Operator. This initializes the cost matrix based on the absolute difference of x to the sorted values and then applies a single row normalization (instead of the full Sinkhorn in OT). Note: Fast mode introduces gradient discontinuities when elements in x are not unique, but is much faster.
  - fast=True and mode="euclidean": Similar to the entropic fast case, but using a Euclidean unit-simplex projection instead of softmax. To the best of our knowledge this variant is novel.
- max_iter: Maximum number of iterations for the Sinkhorn algorithm if mode is "entropic", or for the projection onto the Birkhoff polytope if mode is "euclidean". Unused if fast=True.
Returns:
A SoftIndex of shape (..., n, ..., [n]) (positive Array which sums to 1 over the last dimension). The elements in (..., i, ..., [n]) represent a distribution over values in x for the ith smallest element along the specified dim.
softtorch.sort(x: torch.Tensor, dim: int | None = -1, descending: bool = False, softness: float = 1.0, mode: Literal['hard', 'entropic', 'euclidean'] = 'entropic', fast: bool = True, max_iter: int = 1000) -> torch.return_types.sort[torch.Tensor, torch.Tensor]
Performs a soft version of torch.sort
of x along the specified dim.
Implemented as softtorch.argsort followed by softtorch.take_along_dim; see the
respective documentation for details.
Returns:
- Namedtuple containing two fields:
  - values: Soft sorted values of x, shape (..., n, ...).
  - indices: SoftIndex of shape (..., n, ..., [n]) (positive Array which sums to 1 over the last dimension). Represents the soft indices of the sorted values.
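Example (a minimal sketch; the target profile and loss are hypothetical, chosen only to show gradients flowing through the soft sort):

```python
import torch
import softtorch

x = torch.randn(8, requires_grad=True)
target = torch.linspace(-1.0, 1.0, steps=8)  # hypothetical target profile

# Differentiable sorting: `values` are soft (convex-combination) estimates
# of the sorted input, `indices` is the underlying SoftIndex.
out = softtorch.sort(x, dim=-1, softness=0.1, fast=True)

# Hypothetical objective: make the sorted values match the target profile.
loss = (out.values - target).pow(2).mean()
loss.backward()  # gradients reach x through the soft permutation
```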
softtorch.topk(x: torch.Tensor, k: int, dim: int | None = None, softness: float = 1.0, mode: Literal['hard', 'entropic', 'euclidean'] = 'entropic', fast: bool = True, max_iter: int = 1000) -> torch.return_types.topk[torch.Tensor, torch.Tensor]
Performs a soft version of torch.topk of x along the specified dim.
Arguments:
- x: Input Array of shape (..., n, ...).
- k: The number of top elements to select.
- dim: The dim along which to compute the topk operation. If dim is None, the last dimension of the input is chosen. Defaults to None.
- softness: Softness of the function, should be larger than zero. Defaults to 1.
- mode and fast: These two arguments control the type of softening:
  - mode="hard": Returns the result of torch.topk with a one-hot encoding of the indices.
  - fast=False and mode="entropic": Uses entropy-regularized optimal transport (implemented via Sinkhorn iterations) as in Differentiable Top-k with Optimal Transport. Intuition: The top-k elements are selected by specifying k+1 "anchors" and then transporting the top-k values to the top k anchors, and the remaining (n-k) values to the last anchor. Can be slow for large max_iter.
  - fast=False and mode="euclidean": Similar to the entropic case, but using an L2 regularizer (implemented via projection onto the Birkhoff polytope). This version combines the approaches in Fast Differentiable Sorting and Ranking (L2 regularizer for sorting) and Differentiable Top-k with Optimal Transport (entropic regularizer for top-k).
  - fast=True and mode="entropic": Uses the "SoftSort" operator proposed in SoftSort: A Continuous Relaxation for the argsort Operator. This initializes the cost matrix based on the absolute difference of x to the sorted values and then applies a single row normalization (instead of the full Sinkhorn in OT). Because this is very fast, we do a full soft argsort and then take the top-k elements. Note: Fast mode introduces gradient discontinuities when elements in x are not unique, but is much faster.
  - fast=True and mode="euclidean": Similar to the entropic fast case, but using a Euclidean unit-simplex projection instead of softmax. To the best of our knowledge this variant is novel.
- max_iter: Maximum number of iterations for the Sinkhorn algorithm if mode is "entropic", or for the projection onto the Birkhoff polytope if mode is "euclidean". Unused if fast=True.
Returns:
- Namedtuple containing two fields:
  - values: Top-k values of x, shape (..., k, ...).
  - indices: SoftIndex of shape (..., k, ..., [n]) (positive Array which sums to 1 over the last dimension). Represents the soft indices of the top-k values.
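Example (a minimal sketch; the objective is hypothetical):

```python
import torch
import softtorch

scores = torch.randn(16, requires_grad=True)

# Relaxed "sum of the 3 largest scores": the soft values are weighted
# combinations of the inputs, so entries outside the top 3 can also
# receive a (small) gradient.
top = softtorch.topk(scores, k=3, softness=0.1, fast=True)
objective = top.values.sum()
objective.backward()
```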
softtorch.ranking(x: torch.Tensor, dim: int | None = None, softness: float = 1.0, mode: Literal['hard', 'entropic', 'euclidean'] = 'entropic', fast: bool = True, max_iter: int = 1000, descending: bool = True) -> torch.Tensor
Computes the soft rankings of x along the specified dim.
Arguments:
- x: Input Array of shape (..., n, ...).
- dim: The dim along which to compute the ranking operation. If None, the input Array is flattened before computing the ranking. Defaults to None.
- descending: If True, larger inputs receive smaller ranks (best rank is 0). If False, ranks increase with the input values.
- softness: Softness of the function, should be larger than zero. Defaults to 1.
- mode and fast: These two arguments control the behavior of the ranking operation:
  - mode="hard": Returns the ranking computed via two torch.argsort calls.
  - fast=False and mode="entropic": Uses entropy-regularized optimal transport (implemented via Sinkhorn iterations) as in Differentiable Ranks and Sorting using Optimal Transport. Intuition: We can use the transportation plan obtained in soft sorting for ranking by transporting the sorted ranks (0, 1, ..., n-1) back to the ranks of the original values. Can be slow for large max_iter.
  - fast=False and mode="euclidean": Similar to the entropic case, but using an L2 regularizer (implemented via projection onto the Birkhoff polytope) as in Fast Differentiable Sorting and Ranking.
  - fast=True and mode="entropic": Uses an adaptation of the "SoftSort" operator proposed in SoftSort: A Continuous Relaxation for the argsort Operator. This initializes the cost matrix based on the absolute difference of x to the sorted values and then crucially applies a single column normalization (instead of the row normalization in the original paper). This makes the resulting matrix a unimodal column-stochastic matrix, which is better suited for soft ranking. Note: Fast mode introduces gradient discontinuities when elements in x are not unique, but is much faster.
  - fast=True and mode="euclidean": Similar to the entropic fast case, but using a Euclidean unit-simplex projection instead of softmax. To the best of our knowledge this variant is novel.
- max_iter: Maximum number of iterations for the Sinkhorn algorithm if mode is "entropic", or for the projection onto the Birkhoff polytope if mode is "euclidean". Unused if fast=True.
Returns:
A positive Array of shape (..., n, ...) with values in [0, n-1]. The elements in (..., i, ...) represent the soft rank of the ith element along the specified dim.
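Example (a minimal sketch; target_ranks and the squared-error loss are hypothetical, used only to illustrate differentiable ranking):

```python
import torch
import softtorch

pred = torch.randn(10, requires_grad=True)
target_ranks = torch.randperm(10).float()  # hypothetical ground-truth ranks

# Soft ranks in [0, n-1]; with descending=True the largest entry gets
# (approximately) rank 0.
ranks = softtorch.ranking(pred, softness=0.1, fast=True, descending=True)

# Hypothetical ranking loss: match the target ranks directly.
loss = (ranks - target_ranks).pow(2).mean()
loss.backward()
```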
Comparison operators
softtorch.greater(x: torch.Tensor, y: torch.Tensor, softness: float = 1.0, mode: Literal['hard', 'entropic', 'euclidean', 'pseudohuber', 'cubic', 'quintic'] = 'entropic', epsilon: float = 1e-10) -> torch.Tensor
Computes a soft approximation to elementwise x > y.
Uses a Heaviside relaxation so the output approaches 0 at equality.
Arguments:
- x: First input Array.
- y: Second input Array, same shape as x.
- softness: Softness of the function, should be larger than zero. Defaults to 1.
- mode: If "hard", returns the exact comparison. Otherwise uses a soft Heaviside with the "entropic", "euclidean", "pseudohuber", "cubic", or "quintic" relaxation. Defaults to "entropic".
- epsilon: Small offset so that as softness -> 0, greater returns 0 at equality.
Returns:
SoftBool of same shape as x and y (Array with values in [0, 1]), relaxing the
elementwise x > y.
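Example (a minimal sketch):

```python
import torch
import softtorch

x = torch.linspace(-1.0, 1.0, steps=5, requires_grad=True)
y = torch.zeros(5)

# Soft "x > 0": values in [0, 1] that sharpen towards a hard 0/1 step as
# softness decreases, while remaining differentiable near the boundary.
p = softtorch.greater(x, y, softness=0.2)
p.sum().backward()
```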
softtorch.greater_equal(x: torch.Tensor, y: torch.Tensor, softness: float = 1.0, mode: Literal['hard', 'entropic', 'euclidean', 'pseudohuber', 'cubic', 'quintic'] = 'entropic', epsilon: float = 1e-10) -> torch.Tensor
Computes a soft approximation to elementwise x >= y.
Uses a Heaviside relaxation so the output approaches 1 at equality.
Arguments:
- x: First input Array.
- y: Second input Array, same shape as x.
- softness: Softness of the function, should be larger than zero. Defaults to 1.
- mode: If "hard", returns the exact comparison. Otherwise uses a soft Heaviside with the "entropic", "euclidean", "pseudohuber", "cubic", or "quintic" relaxation. Defaults to "entropic".
- epsilon: Small offset so that as softness -> 0, greater_equal returns 1 at equality.
Returns:
SoftBool of same shape as x and y (Array with values in [0, 1]), relaxing the
elementwise x >= y.
softtorch.less(x: torch.Tensor, y: torch.Tensor, softness: float = 1.0, mode: Literal['hard', 'entropic', 'euclidean', 'pseudohuber', 'cubic', 'quintic'] = 'entropic', epsilon: float = 1e-10) -> torch.Tensor
Computes a soft approximation to elementwise x < y.
Uses a Heaviside relaxation so the output approaches 0 at equality.
Arguments:
- x: First input Array.
- y: Second input Array, same shape as x.
- softness: Softness of the function, should be larger than zero. Defaults to 1.
- mode: If "hard", returns the exact comparison. Otherwise uses a soft Heaviside with the "entropic", "euclidean", "pseudohuber", "cubic", or "quintic" relaxation. Defaults to "entropic".
- epsilon: Small offset so that as softness -> 0, less returns 0 at equality.
Returns:
SoftBool of same shape as x and y (Array with values in [0, 1]), relaxing the
elementwise x < y.
softtorch.less_equal(x: torch.Tensor, y: torch.Tensor, softness: float = 1.0, mode: Literal['hard', 'entropic', 'euclidean', 'pseudohuber', 'cubic', 'quintic'] = 'entropic', epsilon: float = 1e-10) -> torch.Tensor
Computes a soft approximation to elementwise x <= y.
Uses a Heaviside relaxation so the output approaches 1 at equality.
Arguments:
- x: First input Array.
- y: Second input Array, same shape as x.
- softness: Softness of the function, should be larger than zero. Defaults to 1.
- mode: If "hard", returns the exact comparison. Otherwise uses a soft Heaviside with the "entropic", "euclidean", "pseudohuber", "cubic", or "quintic" relaxation. Defaults to "entropic".
- epsilon: Small offset so that as softness -> 0, less_equal returns 1 at equality.
Returns:
SoftBool of same shape as x and y (Array with values in [0, 1]), relaxing the
elementwise x <= y.
softtorch.equal(x: torch.Tensor, y: torch.Tensor, softness: float = 1.0, mode: Literal['hard', 'entropic', 'euclidean', 'pseudohuber', 'cubic', 'quintic'] = 'entropic', epsilon: float = 1e-10) -> torch.Tensor
Computes a soft approximation to elementwise x == y.
Implemented as a soft abs(x - y) <= 0 comparison.
Arguments:
- x: First input Array.
- y: Second input Array, same shape as x.
- softness: Softness of the function, should be larger than zero. Defaults to 1.
- mode: If "hard", returns the exact comparison. Otherwise uses a soft Heaviside with the "entropic", "euclidean", "pseudohuber", "cubic", or "quintic" relaxation. Defaults to "entropic".
- epsilon: Small offset so that as softness -> 0, equal returns 1 at equality.
Returns:
SoftBool of same shape as x and y (Array with values in [0, 1]), relaxing the
elementwise x == y.
softtorch.not_equal(x: torch.Tensor, y: torch.Tensor, softness: float = 1.0, mode: Literal['hard', 'entropic', 'euclidean', 'pseudohuber', 'cubic', 'quintic'] = 'entropic', epsilon: float = 1e-10) -> torch.Tensor
Computes a soft approximation to elementwise x != y.
Implemented as a soft abs(x - y) > 0 comparison.
Arguments:
- x: First input Array.
- y: Second input Array, same shape as x.
- softness: Softness of the function, should be larger than zero. Defaults to 1.
- mode: If "hard", returns the exact comparison. Otherwise uses a soft Heaviside with the "entropic", "euclidean", "pseudohuber", "cubic", or "quintic" relaxation. Defaults to "entropic".
- epsilon: Small offset so that as softness -> 0, not_equal returns 0 at equality.
Returns:
SoftBool of same shape as x and y (Array with values in [0, 1]), relaxing the
elementwise x != y.
softtorch.isclose(x: torch.Tensor, y: torch.Tensor, softness: float = 1.0, rtol: float = 1e-05, atol: float = 1e-08, mode: Literal['hard', 'entropic', 'euclidean', 'pseudohuber', 'cubic', 'quintic'] = 'entropic', epsilon: float = 1e-10) -> torch.Tensor
Computes a soft approximation to torch.isclose for elementwise comparison.
Implemented as a soft abs(x - y) <= atol + rtol * abs(y) comparison.
Arguments:
- x: First input Array.
- y: Second input Array, same shape as x.
- softness: Softness of the function, should be larger than zero. Defaults to 1.
- rtol: Relative tolerance. Defaults to 1e-5.
- atol: Absolute tolerance. Defaults to 1e-8.
- mode: If "hard", returns the exact comparison. Otherwise uses a soft Heaviside with the "entropic", "euclidean", "pseudohuber", "cubic", or "quintic" relaxation. Defaults to "entropic".
- epsilon: Small offset so that as softness -> 0, isclose returns 1 at equality.
Returns:
SoftBool of same shape as x and y (Array with values in [0, 1]), relaxing the
elementwise isclose(x, y).
Logical operators
softtorch.logical_and(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor
Computes soft elementwise logical AND between two SoftBool Arrays.
Fuzzy logic implemented as all(stack([x, y], dim=-1), dim=-1).
Arguments:
- x: First SoftBool input Array.
- y: Second SoftBool input Array.
Returns:
SoftBool of same shape as x and y (Array with values in [0, 1]), relaxing the
elementwise logical AND.
softtorch.logical_not(x: torch.Tensor) -> torch.Tensor
Computes soft elementwise logical NOT of a SoftBool Array.
Fuzzy logic implemented as 1.0 - x.
Arguments:
- x: SoftBool input Array.
Returns:
SoftBool of same shape as x (Array with values in [0, 1]), relaxing the
elementwise logical NOT.
softtorch.logical_or(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor
Computes soft elementwise logical OR between two SoftBool Arrays.
Fuzzy logic implemented as any(stack([x, y], dim=-1), dim=-1).
Arguments:
- x: First SoftBool input Array.
- y: Second SoftBool input Array.
Returns:
SoftBool of same shape as x and y (Array with values in [0, 1]), relaxing the
elementwise logical OR.
softtorch.logical_xor(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor
Computes soft elementwise logical XOR between two SoftBool Arrays.
Arguments:
- x: First SoftBool input Array.
- y: Second SoftBool input Array.
Returns:
SoftBool of same shape as x and y (Array with values in [0, 1]), relaxing the
elementwise logical XOR.
softtorch.all(x: torch.Tensor, dim: int = -1, epsilon: float = 1e-10) -> torch.Tensor
Computes a soft logical AND reduction across a specified dim. Fuzzy logic implemented as the geometric mean along the dim.
Arguments:
- x: SoftBool input Array.
- dim: Axis along which to compute the logical AND. Default is -1 (last dim).
- epsilon: Minimum value for numerical stability inside the log.
Returns:
SoftBool (Array with values in [0, 1]) with the specified dim reduced, relaxing the logical ALL along that dim.
softtorch.any(x: torch.Tensor, dim: int = -1) -> torch.Tensor
Computes a soft logical OR reduction across a specified dim.
Fuzzy logic implemented as 1.0 - all(logical_not(x), dim=dim).
Arguments:
- x: SoftBool input Array.
- dim: Axis along which to compute the logical OR. Default is -1 (last dim).
Returns:
SoftBool (Array with values in [0, 1]) with the specified dim reduced, relaxing the logical ANY along that dim.
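Example (a minimal sketch combining comparison, logical, and reduction operators; the chosen bounds are arbitrary):

```python
import torch
import softtorch

x = torch.randn(4, 6, requires_grad=True)

# Soft "every entry of the row lies in (-1, 1)": combine two soft
# comparisons with fuzzy AND, then reduce over the last dim.
inside = softtorch.logical_and(
    softtorch.greater(x, torch.full_like(x, -1.0)),
    softtorch.less(x, torch.full_like(x, 1.0)),
)
row_ok = softtorch.all(inside, dim=-1)  # SoftBool of shape (4,)
row_ok.sum().backward()
```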
Selection operators
softtorch.where(condition: torch.Tensor, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor
Computes a soft elementwise selection between two Arrays based on a SoftBool
condition. Fuzzy logic implemented as x * condition + y * (1.0 - condition).
Arguments:
- condition: SoftBool condition Array, same shape as x and y.
- x: First input Array, same shape as condition.
- y: Second input Array, same shape as condition.
Returns:
Array of the same shape as x and y, interpolating between x and y according
to condition in [0, 1].
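Example (a minimal sketch; the two branches are arbitrary):

```python
import torch
import softtorch

x = torch.linspace(-2.0, 2.0, steps=9, requires_grad=True)

# Differentiable branching: blend the two branches according to the soft
# condition x > 0 instead of a hard, non-differentiable switch.
cond = softtorch.greater(x, torch.zeros_like(x), softness=0.3)
y = softtorch.where(cond, x ** 2, -x)
y.sum().backward()
```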
softtorch.take_along_dim(x: torch.Tensor, soft_index: torch.Tensor, dim: int | None = None) -> torch.Tensor
Performs a soft version of torch.take_along_dim via a weighted dot product.
Arguments:
- x: Input Array of shape (..., n, ...).
- soft_index: A SoftIndex of shape (..., k, ..., [n]) (positive Array which sums to 1 over the last dimension).
- dim: Axis along which to apply the soft index. If None, the input is flattened before applying the soft indices. Defaults to None.
Returns:
Array of shape (..., k, ...), representing the result after soft selection along the specified dim.
softtorch.take(x: torch.Tensor, soft_index: torch.Tensor, dim: int | None = None) -> torch.Tensor
Performs a soft version of torch.take via a weighted dot product.
Arguments:
- x: Input Array of shape (..., n, ...).
- soft_index: A SoftIndex of shape (k, [n]) (positive Array which sums to 1 over the last dimension).
- dim: Axis along which to apply the soft index. If None, the input is flattened. Defaults to None.
Returns:
Array of shape (..., k, ...) after soft selection.
softtorch.index_select(x: torch.Tensor, soft_index: torch.Tensor, dim: int, keepdim: bool = True) -> torch.Tensor
Performs a soft version of torch.index_select via a weighted dot product.
Arguments:
- x: Input Array of shape (..., n, ...).
- soft_index: A SoftIndex of shape ([n],) (positive Array which sums to 1 over the last dimension).
- dim: Axis along which to apply the soft index.
- keepdim: If True, keeps the reduced dimension as a singleton {1}. Defaults to True.
Returns:
Array after soft indexing, shape (..., {1}, ...).
softtorch.narrow(x: torch.Tensor, soft_start_index: torch.Tensor, length: int, dim: int = 0) -> torch.Tensor
Performs a soft version of torch.narrow via a weighted dot product.
Arguments:
- x: Input Array of shape (..., n, ...).
- soft_start_index: A SoftIndex of shape ([n],) (positive Array which sums to 1 over the last dimension).
- length: Length of the slice to extract.
- dim: Axis along which to apply the soft slice. Defaults to 0.
Returns:
Array of shape (..., length, ...) after soft slicing.