public class MinMaxScalerModel extends Model<MinMaxScalerModel> implements MinMaxScalerParams, MLWritable
Model fitted by MinMaxScaler.

param: originalMin min value for each original column during fitting
param: originalMax max value for each original column during fitting
TODO: The transformer does not yet set the metadata in the output column (SPARK-8529).
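A minimal end-to-end sketch (Java) of obtaining and using a MinMaxScalerModel. The sample data, the column names "features" and "scaledFeatures", and the local SparkSession setup are illustrative, not part of this API:

```java
import java.util.Arrays;
import java.util.List;

import org.apache.spark.ml.feature.MinMaxScaler;
import org.apache.spark.ml.feature.MinMaxScalerModel;
import org.apache.spark.ml.linalg.VectorUDT;
import org.apache.spark.ml.linalg.Vectors;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.Metadata;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

public class MinMaxScalerModelExample {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .appName("MinMaxScalerModelExample")
        .master("local[*]")
        .getOrCreate();

    // Illustrative training data: one vector column named "features".
    List<Row> data = Arrays.asList(
        RowFactory.create(Vectors.dense(1.0, 0.1, -1.0)),
        RowFactory.create(Vectors.dense(2.0, 1.1, 1.0)),
        RowFactory.create(Vectors.dense(3.0, 10.1, 3.0)));
    StructType schema = new StructType(new StructField[]{
        new StructField("features", new VectorUDT(), false, Metadata.empty())});
    Dataset<Row> df = spark.createDataFrame(data, schema);

    // Fitting the MinMaxScaler estimator produces a MinMaxScalerModel,
    // which stores originalMin/originalMax per feature.
    MinMaxScalerModel model = new MinMaxScaler()
        .setInputCol("features")
        .setOutputCol("scaledFeatures")
        .fit(df);

    // Each feature is rescaled into [min(), max()], i.e. [0.0, 1.0] by default.
    model.transform(df).show(false);

    spark.stop();
  }
}
```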
Modifier and Type | Method and Description |
---|---|
MinMaxScalerModel | copy(ParamMap extra) - Creates a copy of this instance with the same UID and some extra params. |
Param<String> | inputCol() - Param for input column name. |
static MinMaxScalerModel | load(String path) |
DoubleParam | max() - upper bound after transformation, shared by all features. Default: 1.0 |
DoubleParam | min() - lower bound after transformation, shared by all features. Default: 0.0 |
Vector | originalMax() |
Vector | originalMin() |
Param<String> | outputCol() - Param for output column name. |
static MLReader<MinMaxScalerModel> | read() |
MinMaxScalerModel | setInputCol(String value) |
MinMaxScalerModel | setMax(double value) |
MinMaxScalerModel | setMin(double value) |
MinMaxScalerModel | setOutputCol(String value) |
String | toString() |
Dataset<Row> | transform(Dataset<?> dataset) - Transforms the input dataset. |
StructType | transformSchema(StructType schema) - Check transform validity and derive the output schema from the input schema. |
String | uid() - An immutable unique ID for the object and its derivatives. |
MLWriter | write() - Returns an MLWriter instance for this ML instance. |
Methods inherited from class org.apache.spark.ml.Transformer: transform, transform, transform
Methods inherited from class org.apache.spark.ml.PipelineStage: params
Methods inherited from interface org.apache.spark.ml.feature.MinMaxScalerParams: getMax, getMin, validateAndTransformSchema
Methods inherited from interface org.apache.spark.ml.param.shared.HasInputCol: getInputCol
Methods inherited from interface org.apache.spark.ml.param.shared.HasOutputCol: getOutputCol
Methods inherited from interface org.apache.spark.ml.param.Params: clear, copyValues, defaultCopy, defaultParamMap, explainParam, explainParams, extractParamMap, extractParamMap, get, getDefault, getOrDefault, getParam, hasDefault, hasParam, isDefined, isSet, paramMap, params, set, set, set, setDefault, setDefault, shouldOwn
Methods inherited from interface org.apache.spark.ml.util.MLWritable: save
Methods inherited from interface org.apache.spark.internal.Logging: $init$, initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, initLock, isTraceEnabled, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning, org$apache$spark$internal$Logging$$log__$eq, org$apache$spark$internal$Logging$$log_, uninitialize
public static MLReader<MinMaxScalerModel> read()
public static MinMaxScalerModel load(String path)
public DoubleParam min()
Description copied from interface: MinMaxScalerParams
lower bound after transformation, shared by all features. Default: 0.0
Specified by: min in interface MinMaxScalerParams
public DoubleParam max()
Description copied from interface: MinMaxScalerParams
upper bound after transformation, shared by all features. Default: 1.0
Specified by: max in interface MinMaxScalerParams
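min() and max() only fix the target range of the rescaled features; setMin and setMax return the model itself, so they can be chained. A short sketch, assuming the model and df from the example above and an illustrative target range of [-1.0, 1.0]:

```java
// Rescale features into [-1.0, 1.0] instead of the default [0.0, 1.0].
model.setMin(-1.0).setMax(1.0);
model.transform(df).show(false);
```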
public final Param<String> outputCol()
Description copied from interface: HasOutputCol
Param for output column name.
Specified by: outputCol in interface HasOutputCol
public final Param<String> inputCol()
Description copied from interface: HasInputCol
Param for input column name.
Specified by: inputCol in interface HasInputCol
public String uid()
Description copied from interface: Identifiable
An immutable unique ID for the object and its derivatives.
Specified by: uid in interface Identifiable
public Vector originalMin()
public Vector originalMax()
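originalMin() and originalMax() expose the per-feature statistics learned during fitting. A tiny sketch, assuming the model from the example above:

```java
// Per-feature minima and maxima observed in the training data.
System.out.println("originalMin: " + model.originalMin());
System.out.println("originalMax: " + model.originalMax());
```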
public MinMaxScalerModel setInputCol(String value)
public MinMaxScalerModel setOutputCol(String value)
public MinMaxScalerModel setMin(double value)
public MinMaxScalerModel setMax(double value)
public Dataset<Row> transform(Dataset<?> dataset)
Description copied from class: Transformer
Transforms the input dataset.
Specified by: transform in class Transformer
Parameters: dataset - (undocumented)

public StructType transformSchema(StructType schema)
Description copied from class: PipelineStage
Check transform validity and derive the output schema from the input schema. We check validity for interactions between parameters during transformSchema and raise an exception if any parameter value is invalid. Parameter value checks which do not depend on other parameters are handled by Param.validate().
Typical implementation should first conduct verification on schema change and parameter validity, including complex parameter interaction checks.
Specified by: transformSchema in class PipelineStage
Parameters: schema - (undocumented)
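transformSchema can be called directly to validate parameters and preview the output schema without touching any data. A sketch, assuming the model and df from the example above (StructType is org.apache.spark.sql.types.StructType):

```java
// Derives the output schema: the input schema plus the "scaledFeatures" column.
StructType outputSchema = model.transformSchema(df.schema());
outputSchema.printTreeString();
```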
public MinMaxScalerModel copy(ParamMap extra)
Description copied from interface: Params
Creates a copy of this instance with the same UID and some extra params. See defaultCopy().
Specified by: copy in interface Params
Specified by: copy in class Model<MinMaxScalerModel>
Parameters: extra - (undocumented)
public MLWriter write()
Description copied from interface: MLWritable
Returns an MLWriter instance for this ML instance.
Specified by: write in interface MLWritable
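write(), together with the static read() and load(String path) above, provides ML persistence. A sketch, assuming the fitted model from the example above; the path is illustrative:

```java
// Save the fitted model and load it back later.
model.write().overwrite().save("/tmp/minmax-scaler-model");
MinMaxScalerModel restored = MinMaxScalerModel.load("/tmp/minmax-scaler-model");
```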
public String toString()
Specified by: toString in interface Identifiable
Overrides: toString in class Object