Class DataSource

java.lang.Object
com.smartgwt.client.docs.serverds.DataSource

public class DataSource extends Object
A DataSource is a data-provider-independent description of a set of objects that will be loaded, edited and saved within the user interface of your application.

This class is not meant to be created and used; it is actually documentation of the settings allowed in a DataSource descriptor (.ds.xml file), for use with Smart GWT Pro Edition and above. See com.smartgwt.client.docs.serverds for how to use this documentation.

Each DataSource consists of a list of fields that make up a DataSource record, along with field types, validation rules, relationships to other DataSources, and other metadata.

The abstract object description provided by a DataSource is easily mapped to a variety of backend object models and storage schemes. The following list shows analogous terminology across systems, keyed by the Smart GWT concept:

• DataSource — Relational Database: Table; Enterprise Java Beans (EJB): EJB class; Entity/Relationship Modeling: Entity; OO/UML: Class; XML Schema/WSDL: Element Schema (ComplexType); LDAP: Objectclass
• Record — Relational Database: Row; EJB: EJB instance; Entity/Relationship Modeling: Entity instance; OO/UML: Class instance/Object; XML Schema/WSDL: Element instance (ComplexType); LDAP: Entry
• Field — Relational Database: Column; EJB: Property; Entity/Relationship Modeling: Attribute; OO/UML: Property/Attribute; XML Schema/WSDL: Attribute or Element (SimpleType); LDAP: Attribute

DataSources can be declared in either JavaScript or XML format, and can also be imported from existing metadata formats, including XML Schema.
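For reference, a minimal XML descriptor might look like the following sketch (the DataSource ID, field names and types are purely illustrative):

     <DataSource ID="supplyItem" serverType="sql">
         <fields>
             <field name="itemID"   type="sequence" primaryKey="true"/>
             <field name="itemName" type="text"     title="Item" required="true"/>
             <field name="unitCost" type="float"/>
         </fields>
     </DataSource>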

Data Binding is the process by which Data Binding-capable UI components can automatically configure themselves for viewing, editing and saving data described by DataSources. DataBinding is covered in the 'QuickStart Guide', Chapter 6, Data Binding.

Data Integration is the process by which a DataSource can be connected to server systems such as SQL DataBases, Java Object models, WSDL web services and other data providers. Data Integration comes in two variants: client-side and server-side. Server-side integration uses the Smart GWT Java-based server to connect to data represented by Java Objects or JDBC-accessible databases. Client-side integration connects Smart GWT DataSources to XML, JSON or other formats accessible via HTTP.

DataSources have a concept of 4 core operations ("fetch", "add", "update" and "remove") that can be performed on the set of objects represented by a DataSource. Once a DataSource has been integrated with your data store, databinding-capable UI components can leverage the 4 core DataSource operations to provide many complete user interactions without the need to configure how each individual component loads and saves data.

These interactions include grid views, tree views, detail views, form-based editing and saving, grid-based editing and saving, and custom interactions provided by custom databinding-capable components (see the Pattern Reuse Example).

Field Details

    • canAggregate

      public Boolean canAggregate
      By default, all DataSources are assumed to be capable of handling ServerSummaries on fetch or filter type operations. This property may be set to false to indicate that this dataSource does not support serverSummaries.

      NOTE: If you specify this property in a DataSource descriptor (.ds.xml file), it is enforced on the server. This means that if you run a request containing serverSummaries against a DataSource that advertises itself as canAggregate:false, it will be rejected.
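
      For example, a DataSource backed by a store that cannot compute aggregates could declare this directly in its descriptor (a sketch; the ID is illustrative):

        <DataSource ID="activityLog" serverType="sql" canAggregate="false">
            <!-- fields as usual -->
        </DataSource>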

      Default value is null

      See Also:
    • sqlUsePagingHint

      public Boolean sqlUsePagingHint
      If explicitly set true or left null, causes the server to use a "hint" in the SQL we generate for paged queries. If explicitly set false, forces off the use of hints. This property can be overridden per operationBinding - see OperationBinding.sqlUsePagingHint.

      Note this property is only applicable to SQL DataSources, only when a paging strategy of "sqlLimit" is in force, and it only has an effect for those specific database products where we employ a native hint in the generated SQL in an attempt to improve performance.

      Default value is null

      See Also:
    • suppressAutoMappings

      public Boolean suppressAutoMappings
      Applies to RestConnector dataSources (serverType "rest") only, and is overridable per operationBinding (see OperationBinding.requestTemplate).

      By default, if you have a requestFormat of "params", RestConnector will add your values or criteria as standard HTTP parameters to the URL it generates for hitting the target REST server - this is described in more detail in the RESTRequestFormat documentation.

      With a requestFormat of "json", RestConnector will generate a block of JSON from your criteria or values, again as described in the RESTRequestFormat docs.

      You can switch off both of these behaviors by setting this property to true.
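
      As a sketch, a RestConnector DataSource that suppresses both behaviors (so that only explicitly templated values reach the target service) might be declared like this - the ID is illustrative and the target-service configuration is omitted:

        <DataSource ID="remoteOrders" serverType="rest"
                    requestFormat="params"
                    suppressAutoMappings="true">
            <!-- fields and operationBindings describing the target REST service -->
        </DataSource>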

      Default value is false

      See Also:
    • resultBatchSize

      public int resultBatchSize
      Very advanced: for servers that do not support paging, and must return large numbers of XML records in one HTTP response, Smart GWT breaks up the processing of the response in order to avoid the "script running slowly" dialog appearing for an end user.

      If you have a relatively small number of records with a great deal of properties or subobjects on each record, and you have not set dropExtraFields to eliminate unused data, and you see the "script running slowly" dialog, you may need to set this number lower.

      Default value is 150

    • beanClassName

      public String beanClassName
      This property has different meanings depending on the serverType:

      For SQL DataSources (DataSources with serverType "sql")
      If set, results from the database will be used to create one instance of the indicated Java bean per database row. Otherwise a Map is used to represent each row retrieved from SQL.

      With this feature active, a DSResponse from this DataSource will contain a Collection of instances of the indicated beanClassName, available via DSResponse.getData(). This creates a couple of possibilities:

      Add business logic for derived properties, such as computed formulas
      For example, declare a DataSourceField named "revenueProjection". By default this field will call getRevenueProjection() on your bean to retrieve the value to send to the client. Your implementation of getRevenueProjection() could apply some kind of formula to other values loaded from the database.
      Call business logic on retrieved beans via DMI
      By adding a DMI method that calls DSRequest.execute() to retrieve a DSResponse, you have an opportunity to call business logic methods on the beans representing each row affected by the DSRequest. For example, notify a related BPEL process of changes to certain fields.

      By using beanClassName on a specific OperationBinding, you can:

      • Use a bean to represent your data only when it matters; for example, avoid the overhead of using a bean for "fetch" operations, but do use a bean for "update" operations so that you can execute relevant business logic after the update completes.
      • Skip the use of beans for complex reporting queries that produce results unrelated to your persistent object model. Set beanClassName to blank ("") on a specific operationBinding to override DataSource.beanClassName for that specific operation.
      • For SQL joins that produce additional data fields, use a special, operation-specific bean that represents a join of multiple entities and contains business logic specific to that joined dataset
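
      As a rough sketch (class, ID and field names are illustrative), a DataSource-level beanClassName combined with an operation-specific override might look like this:

        <DataSource ID="orders" serverType="sql"
                    beanClassName="com.example.OrderBean">
            <fields>
                <field name="orderID"           type="sequence" primaryKey="true"/>
                <field name="amount"            type="float"/>
                <!-- value supplied by OrderBean.getRevenueProjection() -->
                <field name="revenueProjection" type="float"/>
            </fields>
            <operationBindings>
                <!-- skip bean instantiation for a complex reporting fetch -->
                <operationBinding operationType="fetch" operationId="report" beanClassName=""/>
            </operationBindings>
        </DataSource>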

      Note that beanClassName affects what numeric field types will be used for inbound DSRequest data. For fields with numeric types, the record data in DSRequests will automatically be converted to the type of the target field, before the request is received in a DMI. For details, see DsRequestBeanTypes.

      Note that DMI also has a built-in facility for populating a bean with the inbound DSRequest.data - just declare the bean as a method argument.

      For generic DataSources (DataSources with serverType "generic")
      Reify sets this property when it creates a generic DataSource using the Javabean Wizard. It has no built-in server-side effects.

      For Hibernate DataSources (DataSources with serverType "hibernate")
      The name of the Java bean or POJO class that is mapped in Hibernate. This will typically be the fully-qualified class name - eg com.foo.MyClass - but it may be the simple class name - just MyClass - or it may be some other value. It all depends on how your classes have been mapped in Hibernate.

      The declared Java bean will affect how its properties will be mapped to built-in numeric types; see the Hibernate Integration overview for details.

      Note: If you are intending to use Hibernate as a data-access layer only, you do not need to create Hibernate mappings or Java objects: Smart GWT will generate everything it needs on the fly.

      For JPA DataSources (DataSources with serverType "jpa" or "jpa1")
      The fully qualified class name of the JPA annotated entity.

      NOTE for Hibernate and JPA users: When you use JPA, or use Hibernate as a full ORM system (ie, not just allowing Smart GWT Server to drive Hibernate as a data access layer), the beans returned on the server-side are live. This means that if you make any changes to them, the ORM system will persist those changes. This is true even if the beans were created as part of a fetch operation.

      This causes a problem in the common case where you want to use a DMI or custom DataSource implementation to apply some post-processing to the beans fetched from the persistent store. If you change the values in the beans directly, those changes will be persisted.

      If you want to alter the data returned from a JPA or Hibernate persistent store as part of a fetch request just so you can alter what gets sent to the client, you can use the server-side DSResponse's getRecords() method. This will return your bean data in "record" format - ie, as a List of Maps. You can alter these records without affecting your persistent store, and then call setData() on the DSResponse, passing the altered list of records. See the server-side Javadocs for DSResponse for details of these two methods.

      Default value is null

      See Also:
    • auditDataSourceID

      public String auditDataSourceID
      For DataSources with auditing enabled, optionally specifies the ID of the audit DataSource. If this property is not specified, the ID of the audit DataSource will be audit_[OriginalDSID].
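
      For instance, an audited DataSource directing its change log to a specifically named audit DataSource might be declared as follows (a sketch; the IDs are illustrative, and auditing itself is assumed to be switched on via the audit attribute):

        <DataSource ID="expenses" serverType="sql"
                    audit="true"
                    auditDataSourceID="expensesAuditTrail">
            <!-- fields as usual -->
        </DataSource>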

      Default value is null

    • serverConstructor

      public String serverConstructor
      This property allows you to write and use custom DataSource subclasses on the server, by specifying either
      • the fully-qualified name of the DataSource subclass that should be instantiated server-side for this dataSource, or
      • the token "spring:" followed by a valid Spring bean ID, if you wish to instantiate your custom dataSource object using Spring dependency injection. For example, "spring:MyDataSourceBean". See also ServerInit for special concerns with framework initialization when using Spring. It is also particularly important that you read the discussion of caching and thread-safety linked to below, as there are special considerations in this area when using Spring.
      • the token "cdi:" followed by a valid CDI bean name, if you wish to instantiate your custom dataSource object using CDI dependency injection. For example, "cdi:MyDataSourceBean".

      One reason you might wish to do this would be to override the validate() method to provide some arbitrary custom validation (such as complex database lookups, validation embedded in legacy applications, etc). It is also possible - though obviously a more substantial task - to override the execute() method in your custom DataSource. This is one way of creating a completely customized DataSource implementation.

      Note: If you use this property, you are responsible for making sure that it refers to a valid server-side class that extends com.isomorphic.datasource.BasicDataSource, or to a Spring bean of the same description. If your implementation relies on methods or state only present in certain specialized subclasses of DataSource (for example, you want the normal behavior and features of a HibernateDataSource, but with a specialized validate() method), then you should extend the subclass rather than the base class.

      NOTE: Please take note of the points made in this discussion of caching and thread-safety issues in server-side DataSources.
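
      The three addressing styles described above would appear in a descriptor as follows (a sketch; the class and bean names are illustrative):

        <!-- fully-qualified class name -->
        <DataSource ID="orders" serverType="sql"
                    serverConstructor="com.example.OrderDataSource"/>

        <!-- Spring bean ID -->
        <DataSource ID="orders" serverType="sql"
                    serverConstructor="spring:MyDataSourceBean"/>

        <!-- CDI bean name -->
        <DataSource ID="orders" serverType="sql"
                    serverConstructor="cdi:MyDataSourceBean"/>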

      Default value is null

    • dataFormat

      public DSDataFormat dataFormat
      Indicates the format to be used for HTTP requests and responses when fulfilling DSRequests (eg, when fetchData() is called).

      Default value is "iscServer"

      See Also:
    • jsonSuffix

      public String jsonSuffix
      Allows you to specify an arbitrary suffix string to apply to all json format responses sent from the server to this application.

      The inclusion of such a suffix ensures your code is not directly executable outside of your application, as a preventative measure against javascript hijacking.

      Only applies to responses formatted as json objects. Does not apply to responses returned via scriptInclude type transport.

      Default value is null

      See Also:
    • sequenceMode

      public SequenceMode sequenceMode
      For fields of type "sequence" in a dataSource of serverType "sql", indicates the SequenceMode to use. This property has no effect for fields or dataSources of other types.

      You can set a default sequenceMode for all DataSources of a given database type by setting property "sql.{database_type}.default.sequence.mode" in server.properties. You can set a global default sequenceMode that applies to all database types by setting property "sql.default.sequence.mode". For example:

          sql.mysql.default.sequence.mode: jdbcDriver
        

      Default value is "native"

    • sqlPaging

      public SQLPagingStrategy sqlPaging
      The paging strategy to use for this DataSource. If this property is not set, the default paging strategy, specified with the server.properties setting sql.defaultPaging, is used.

      This setting can be overridden with the OperationBinding.sqlPaging property.

      NOTE: Operations that involve a customSQL clause ignore this property, because customSQL operations usually need to be treated as special cases. For these operations, the paging strategy comes from the server.properties setting sql.defaultCustomSQLPaging or sql.defaultCustomSQLProgressivePaging, depending on whether or not progressiveLoading is in force. Note that these can always be overridden by a sqlPaging setting on the OperationBinding.

      Default value is null

      See Also:
    • script

      public String script
      Default scriptlet to be executed on the server for each operation. If OperationBinding.script is specified, it will be executed for the operation binding in question instead of running this scriptlet.

      Scriptlets are used similarly to DMIs configured via serverObject or OperationBinding.serverObject - they can add business logic by modifying the DSRequest before it's executed, modifying the default DSResponse, or taking other, unrelated actions.

      For example:

           <DataSource>
              <script language="groovy">
                 ... Groovy code ...
              </script>
             ... other DataSource properties
           </DataSource>
        

      Scriptlets can be written in any language supported by the "JSR 223" standard, including Java itself. See the DMI Script Overview for rules on how to return data, add additional imports, and other settings.

      The following variables are available for DMI scriptlets:

      • requestContext: RequestContext (from com.isomorphic.servlet)
      • dataSource: the current DataSource (same as DSRequest.getDataSource())
      • dsRequest: the current DSRequest
      • criteria: shortcut to DSRequest.getCriteria() (a Map)
      • values: shortcut to DSRequest.getValues() (a Map)
      • oldValues: shortcut to DSRequest.getOldValues() (a Map)
      • log: an instance of com.isomorphic.log.Logger, so your scripts can log in the same manner as regular Java code
      • config: an instance of com.isomorphic.base.Config, so your scripts have access to your server.properties settings
      • sqlConnection: SQLDataSource only: the current SQLConnection object. If automatic transactions are enabled, this SQLConnection is in the context of the current transaction
      • rpcManager: the current RPCManager
      • applicationContext: the Spring ApplicationContext (when applicable)
      • beanFactory: the Spring BeanFactory (when applicable)

      Scriptlets also have access to a set of contextual variables related to the Servlets API, as follows:

      • servletRequest: the current ServletRequest
      • session: the current HttpSession
      • servletResponse: the current ServletResponse (advanced use only)
      • servletContext: the current ServletContext (advanced use only)
      As with DMI in general, be aware that if you write scriptlets that depend upon these variables, you preclude your DataSource from being used in the widest possible variety of circumstances. For example, adding a scriptlet that relies on the HttpSession prevents your DataSource from being used in a command-line process.

      Note that if a dataSource configuration has both a <script> block and a specified serverObject for some operation, the script block will be executed, and the serverObject ignored.
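
      As a hedged illustration of how these variables can be combined, the following Groovy scriptlet restricts every "fetch" to the current user's records and logs the adjusted criteria (the "ownerId" field and the "userId" session attribute are illustrative, and relying on the HttpSession ties this DataSource to a servlet context, as noted above):

        <DataSource ID="orders" serverType="sql">
           <script language="groovy">
              if (dsRequest.getOperationType() == "fetch") {
                 // force an additional criterion based on the logged-in user
                 criteria.put("ownerId", session.getAttribute("userId"));
                 log.info("Adjusted fetch criteria: " + criteria);
              }
              // hand the (possibly modified) request to normal processing
              return dsRequest.execute();
           </script>
           ... other DataSource properties
        </DataSource>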

      Default value is null

    • fileNameField

      public String fileNameField
      The native field name used by this DataSource on the server to represent the fileName for FileSource Operations. Any extensions to the fileName to indicate type or format (e.g. ".ds.xml") are stored in the fileTypeField and fileFormatField, if specified for this DataSource.

      If not specified for a DataSource, the fileNameField will be inferred on the server as follows:

      • If there is a field named "fileName", "name", or "title", then that field is used.
      • Otherwise, if there is a single primary key, and it has the type "text", then that field is used.
      • Otherwise, an error is logged

      Default value is null

      See Also:
    • autoJoinTransactions

      public Boolean autoJoinTransactions
      If true, causes all operations on this DataSource to automatically start or join a transaction associated with the current HttpServletRequest. This means that multiple operations sent to the server in a request queue will be committed in a single transaction.

      Note that this includes fetch operations - setting this property to true has the same effect as a transaction policy of ALL for just this DataSource's operations - see the server-side RPCManager.setTransactionPolicy() for details of the different TransactionPolicy settings.

      If this property is explicitly false, this causes all operations on this DataSource to be committed individually - the same as a transaction policy of NONE, just for this DataSource's operations.

      In either case, you can override the setting for individual operations - see OperationBinding.autoJoinTransactions.
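
      For example, a sketch of a DataSource that joins the request's transaction for everything except one bulk-load operation (the IDs are illustrative):

        <DataSource ID="invoices" serverType="sql" autoJoinTransactions="true">
            <operationBindings>
                <!-- commit this bulk add individually rather than joining the queue's transaction -->
                <operationBinding operationType="add" operationId="bulkLoad"
                                  autoJoinTransactions="false"/>
            </operationBindings>
            <!-- fields as usual -->
        </DataSource>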

      If this property is null or not defined, we fall back to the default setting for this type of DataSource. These are defined in server.properties as follows:

      • Hibernate: hibernate.autoJoinTransactions
      • JPA/JPA2: jpa.autoJoinTransactions
      • SQL: There is one setting per configured database connection (dbName). For example, the setting for the default MySQL connection is sql.Mysql.autoJoinTransactions
      If the setting is not defined at the DataSource-type level, we use the system global default, which is defined in server.properties as autoJoinTransactions.

      At the dbName and global system levels, you can set the autoJoinTransactions attribute to a valid Transaction Policy, rather than a simple true or false (although these values work too - true is the same as ALL, false is the same as NONE). For valid TransactionPolicy values and their meanings, see the server-side Javadoc for RPCManager.setTransactionPolicy()

      Note that the configuration settings discussed here can be overridden for a particular queue of requests by setting the server-side RPCManager's transaction policy. Look in the server-side Javadoc for RPCManager.getTransactionPolicy().

      Transactions can also be initiated manually, separate from the RPCManager/HttpServletRequest lifecycle, useful for both multi-threaded web applications, and standalone applications that don't use a servlet container - see StandaloneDataSourceUsage.

      NOTE: Setting this property to true does not cause a transactional persistence mechanism to automatically appear - you have to ensure that your DataSource supports transactions. The Smart GWT built-in SQL, Hibernate and JPA DataSources support transactions, but note that they do so at the provider level. This means that you can combine updates to, say, an Oracle database and a MySQL database in the same queue, but they will be committed in two transactions - one per database. The Smart GWT server will commit or rollback these two transactions as if they were one, so a failure in some Oracle update would cause all the updates to both databases to be rolled back. However, this is not a true atomic transaction; it is possible for one transaction to be committed whilst the other is not - in the case of hardware failure, for example.

      NOTE: Not all the supported SQL databases are supported for transactions. Databases supported in this release are:

      • DB2
      • HSQLDB
      • Firebird
      • Informix
      • Microsoft SQL Server
      • MySQL (you must use InnoDB tables; the default MyISAM storage engine does not support transactions)
      • MariaDB
      • Oracle
      • PostgreSQL

      Default value is null

      See Also:
    • auditUserFieldName

      public String auditUserFieldName
      For DataSources with auditing enabled, specifies the field name used to store the user who performed the operation. If empty string is specified as the field name, the audit DataSource will not store this field.

      Default value is "audit_modifier"

    • operationBindings

      public OperationBinding[] operationBindings
      Optional array of OperationBindings, which provide instructions to the DataSource about how each DSOperation is to be performed.

      When using the Smart GWT Server, OperationBindings are specified in your DataSource descriptor (.ds.xml file) and control server-side behavior such as what Java object to route DSRequest to (OperationBinding.serverObject) or customizations to SQL, JQL and HQL queries (OperationBinding.customSQL, OperationBinding.customJQL and OperationBinding.customHQL). See the Java Integration samples.

      For DataSources bound to WSDL-described web services using serviceNamespace, OperationBindings are used to bind each DataSource operationType to an operation of a WSDL-described web service, so that a DataSource can both fetch and save data to a web service.

      For example, this code accomplishes part of the binding to the SalesForce partner web services

             DataSource dataSource = new DataSource();
             dataSource.setServiceNamespace("urn:partner.soap.sforce.com");
             OperationBinding fetch = new OperationBinding();
             fetch.setOperationType(DSOperationType.FETCH);
             fetch.setWsOperation("query");
             fetch.setRecordName("sObject");
             OperationBinding add = new OperationBinding();
             add.setOperationType(DSOperationType.ADD);
             add.setWsOperation("create");
             add.setRecordName("SaveResult");
             OperationBinding update = new OperationBinding();
             update.setOperationType(DSOperationType.UPDATE);
             update.setWsOperation("update");
             update.setRecordName("SaveResult");
             OperationBinding remove = new OperationBinding();
             remove.setOperationType(DSOperationType.REMOVE);
             remove.setWsOperation("delete");
             remove.setRecordName("DeleteResult");
             dataSource.setOperationBindings(fetch, add, update, remove);
        
      NOTE: additional code is required to handle authentication and other details, see the complete code in smartclientSDK/examples/databinding/SalesForce.

      For DataSources that contact non-WSDL-described XML or JSON services, OperationBindings can be used to separately configure the URL, HTTP method, input and output processing for each operationType. This makes it possible to fetch JSON data from one URL for the "fetch" operationType and save to a web service for the "update" operationType, while appearing as a single integrated DataSource to a DataBoundComponent such as an editable ListGrid.

      If no operationBinding is defined for a given DataSource operation, all of the properties which are valid on the operationBinding are checked for on the DataSource itself.

      This also means that for a read-only DataSource, that is, a DataSource only capable of fetch operations, operationBindings need not be specified, and instead all operationBinding properties can be set on the DataSource itself. In the RSS Feed sample, you can see an example of using OperationBinding properties directly on the DataSource in order to read an RSS feed.
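
      A hedged sketch of that last pattern - a DataSource whose "fetch" reads JSON from one URL while "update" posts to another (the URLs, dataProtocol and omitted field definitions are illustrative):

        <DataSource ID="feedItems" dataFormat="json">
            <operationBindings>
                <operationBinding operationType="fetch"
                                  dataURL="https://example.com/items/list"/>
                <operationBinding operationType="update"
                                  dataURL="https://example.com/items/save"
                                  dataProtocol="postParams"/>
            </operationBindings>
            <!-- fields as usual -->
        </DataSource>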

      Default value is null

      See Also:
    • logSlowRemove

      public Integer logSlowRemove
      Allows you to specify "remove" operation SQL query execution time threshold in milliseconds, which if exceeded query is identified as "slow" and may be logged under specific logging category.

      See logSlowSQL for more details.

      Default value is null

    • fileTypeField

      public String fileTypeField
      The native field name used by this DataSource on the server to represent the fileType for FileSource Operations.

      If the fileTypeField is not configured, then a field named "fileType" will be used, if it exists. Otherwise, the DataSource will not track fileTypes -- this may be acceptable if, for instance, you use a separate DataSource for each fileType.

      The fileType is specified according to the extension that would have been used in the filesystem -- for instance, the fileType for employees.ds.xml would be "ds".

      Default value is null

      See Also:
    • useFlatFields

      public Boolean useFlatFields
      Like DataBoundComponent.useFlatFields, but applies to all DataBound components that bind to this DataSource.

      Default value is null

    • infoField

      public String infoField
      Name of the field that has the second most pertinent piece of textual information in the record, for use when a DataBoundComponent needs to show a short summary of a record.

      For example, for a DataSource of employees, a "job title" field would probably be the second most pertinent text field aside from the employee's "full name".

      Unlike titleField, infoField is not automatically determined in the absence of an explicit setting.

      Default value is null

      See Also:
    • xmlNamespaces

      public Map xmlNamespaces
      Sets the XML namespace prefixes available for XPaths on a DataSource-wide basis. See OperationBinding.xmlNamespaces for details.

      Default value is See below

      See Also:
    • multiInsertNonMatchingStrategy

      public MultiInsertNonMatchingStrategy multiInsertNonMatchingStrategy
      For dataSources of serverType "sql" only, this property sets the multi-insert "non matching" strategy for add requests on this dataSource. Only has an effect if the add request specifies a list of records as the data, and only if multiInsertStrategy is set to "multipleValues" either globally or at the DSRequest, OperationBinding, or DataSource level. See the docs for MultiInsertNonMatchingStrategy for more information.

      Note that this setting (along with the other multi-insert properties, multiInsertStrategy and multiInsertBatchSize) can be overridden at the operationBinding level and the dsRequest level. If you wish to configure multi-insert setting globally, see the documentation for multiInsertStrategy.

      Default value is "padWithNulls"

      See Also:
    • requires

      public VelocityExpression requires
      Indicates that the specified VelocityExpression must evaluate to true for a user to access this DataSource.

      See also OperationBinding.requires.
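
      As an illustrative sketch only - the exact set of context variables available to the expression depends on your configuration, so treat the servletRequest reference here as an assumption - a simple role check might look like:

        <DataSource ID="payroll" serverType="sql"
                    requires="$servletRequest.isUserInRole('hr-admin')">
            <!-- fields as usual -->
        </DataSource>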

      Default value is null

      See Also:
    • ID

      public Identifier ID
      Unique identifier for this DataSource. Required for all DataSources.

      Default value is null

    • maxFileVersions

      public Integer maxFileVersions
      If automatic file versioning is enabled for a FileSource DataSource, this property configures the maximum number of versions to retain.

      Default value is 20

      See Also:
    • allowDynamicTreeJoins

      public Boolean allowDynamicTreeJoins
      By default, custom dataSource implementations are assumed to be unable to support dynamic tree joins. If you create a custom dataSource that can support such joins, set this flag to true.

      Default value is null

    • tagName

      public String tagName
      Tag name to use when serializing to XML. If unspecified, the dataSource.ID will be used.

      Default value is null

      See Also:
    • generateAuditDS

      public boolean generateAuditDS
      For an audited DataSource, controls whether the Framework will attempt to auto-generate the audit DataSource. Note that this property is independent of auditDataSourceID so that, by default, even when the audit DataSource is given a non-default ID, the Framework will still attempt to auto-generate it.

      Default value is true

    • allowClientRequestedSummaries

      public Boolean allowClientRequestedSummaries
      If a DSRequest arrives from the client that requests server-calculated summaries, should it be allowed?

      Note this setting only affects dsRequests that come from the browser (or another client). This setting has no effect on server summaries declared in .ds.xml files or summaries configured in DSRequests created programmatically on the server side, which are always allowed.

      Default value of null means this DataSource will use the system-wide default, which is set via datasources.allowClientRequestedSummaries in server.properties, and defaults to allowing client-requested summaries.

      If client-requested summarization is allowed, but the server-side <operationBinding> provides specific summarization settings, the client-requested summarization is ignored.

      Default value is null

      See Also:
    • responseTemplate

      public String responseTemplate
      Applies to RestConnector dataSources (serverType "rest") only, and is overridable per operationBinding (see OperationBinding.responseTemplate).

      If you have a responseFormat of "json" or "xml", this property allows you to specify a template for RestConnector to use in order to construct response data for the client, from the response data sent by the REST server. This allows you to use the full power of the Velocity Templating Language to perform mappings and other transformations, all declaratively. The process involves the following steps:

      1. We parse the REST response text as either JSON or XML, depending on the responseFormat
      2. If a record-level XPath is declared, it is applied to the parsed REST response data
      3. We parse the template as JSON and convert it into an internal object structure
      4. Velocity templating is now performed on this structure, with the processed REST response data provided as context variables. Templates intended for use with multi-record datasets should be framed in terms of a single record - if the REST response data consists of multiple records, each record will be put through the template in turn
      5. If the original request came from a remote client (eg, a browser-based UI), the same downstream serialization logic as used for all other DataSource types will serialize this processed structure in accordance with the client-server dataFormat in use; this is completely separate from the responseFormat used to process the response from the REST server
      Note, if you provide a responseTemplate, it serves as the basis for the entire JSON or XML structure sent back to the client, so you can include, omit, rename or transform values from the REST server's response as required.

      Example template

      Consider a case where the REST server sends back this structure:
        {
             "ID": 1234,
             "FOO": 27,
             "BAR": "TRUE",
             "ABC": "String Value"
        }
      Or, equivalently for a responseFormat of XML (note, this would also require a recordXPath of "/ITEM" to navigate to the subvalues):
        <?xml version="1.0" encoding="UTF-8"?>
        <ITEM>
            <ID>1234</ID>
            <FOO>27</FOO>
            <BAR>TRUE</BAR>
            <ABC>String value</ABC>
        </ITEM>
      You need to transform this into the record format required by your local dataSource, which looks like this:
        {
             id: 1234, // ID from the REST response
             zoo: 27,  // FOO from the REST response
             baz: true,  // BAR from the REST response
             def: "String value"  // abc from the REST response
             isTBD: false  // should be true if "def" == "TBD", otherwise false
        }
      The following template would declaratively handle the necessary mappings from one format to the other:
        <responseTemplate>
        { 
             id: $restResponseData.ID,
             zoo: $restResponseData.FOO,
             baz: #if($restResponseData.BAR == "TRUE") true #else false #end,
             def: "$restResponseData.ABC",
             isTBD: #if($restResponseData.ABC == "TBD") true #else false #end
        }
        </responseTemplate>
      Logically, it makes sense to express the template in JSON: XML is only the format of the data sent back by the remote REST server; by the time we come to apply the template, that XML has already been parsed into an internal memory structure based on Java Lists and Maps, and JSON is the simplest text-based analog of that data structure.

      However, if you have a requestFormat of "xml", you will be expressing your requestTemplate in XML; again, this makes sense because the requestTemplate is a representation of the actual text we are going to send to the remote REST server, whereas the responseTemplate is not a direct representation of anything (post-templating, it will be serialized in the format that the client requires, which is unrelated to the format required by the remote REST server).

      Even though this difference makes sense, it may seem confusing or inconsistent to express the requestTemplate as XML and the responseTemplate as JSON, so when requestFormat is "xml", we support expressing the responseTemplate in either JSON or XML. So if you prefer, you could write this template in XML:

        <responseTemplate><![CDATA[
          <ITEM>
             <id>$restResponseData.ID</id>
             <zoo>$restResponseData.FOO</zoo>
             <baz>#if($restResponseData.BAR == "TRUE") true #else false #end</baz>
             <def>$restResponseData.ABC</def>
             <isTBD>#if($restResponseData.ABC == "TBD") true #else false #end</isTBD>
          </ITEM>
        ]]></responseTemplate>
      See the Velocity User Guide for full details of Velocity's templating features

      Default value is null

      See Also:
    • useLocalValidators

      public Boolean useLocalValidators
      Whether to attempt validation on the client at all for this DataSource. If unset (the default), client-side validation is enabled.

      Disabling client-side validation entirely is a good way to test server-side validation.

      Default value is null

      See Also:
    • enumTranslateStrategy

      public EnumTranslateStrategy enumTranslateStrategy
      Sets the strategy this DataSource uses to translate Java enumerated types (objects of type enum) to and from JavaScript. This property is only applicable if you are using the Smart GWT server.

      Default value is null

    • clientOnly

      public Boolean clientOnly
      A clientOnly DataSource simulates the behavior of a remote data store by manipulating a static dataset in memory as DSRequests are executed on it. Any changes are lost when the user reloads the page or navigates away.

      A clientOnly DataSource will return responses asynchronously, just as a DataSource accessing remote data does. This allows a clientOnly DataSource to be used as a temporary placeholder while a real DataSource is being implemented - if a clientOnly DataSource is replaced by a DataSource that accesses a remote data store, UI code for components that used the clientOnly DataSource will not need to be changed.

      A clientOnly DataSource can also be used as a shared cache of modifiable data across multiple UI components when immediate saving is not desirable. In this case, several components may interact with a clientOnly DataSource and get the benefit of ResultSet behaviors such as automatic cache sync and in-browser data filtering and sorting. When it's finally time to save, cacheData can be inspected for changes and data can be saved to the original DataSource via addData(), updateData() and removeData(), possibly in a transactional queue. Note that getClientOnlyDataSource() lets you easily obtain a clientOnly DataSource representing a subset of the data available from a normal DataSource.

      See also cacheAllData - a DataSource with cacheAllData enabled behaves like a write-through cache, performing fetch and filter operations locally while still performing remote save operations immediately.

      ClientOnly DataSources can be populated programmatically via cacheData - see this discussion for other ways to populate a client-only DataSource with data.
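
      As one hedged example, a purely client-side DataSource whose initial data is loaded from a static test-data file might be declared like this (the testFileName path and field definitions are illustrative):

        <DataSource ID="scratchOrders" clientOnly="true"
                    testFileName="/ds/test_data/scratchOrders.data.xml">
            <fields>
                <field name="orderID" type="integer" primaryKey="true"/>
                <field name="status"  type="text"/>
            </fields>
        </DataSource>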

      Default value is false

      See Also:
    • ignoreTextMatchStyleCaseSensitive

      public Boolean ignoreTextMatchStyleCaseSensitive
      For fields on this dataSource that specify ignoreTextMatchStyle true, the prevailing textMatchStyle is ignored and Smart GWT matches exact values. This property dictates whether that match is case-sensitive like the "exactCase" textMatchStyle, or case-insensitive like the "exact" textMatchStyle (the default). Please see the TextMatchStyle documentation for a discussion of the nuances of case-sensitive matching.

      Default value is false

    • responseFormat

      public RESTResponseFormat responseFormat
      For a RestConnector DataSource, the response format to use. Can be overridden at operationBinding level - see OperationBinding.responseFormat. Note, if responseFormat is not specified at either the DataSource or OperationBinding level, response processing will throw an exception.

      Default value is null

      See Also:
    • criteriaPolicy

      public CriteriaPolicy criteriaPolicy
      Decides under what conditions the ResultSet cache should be dropped when the ResultSet.criteria changes.

      Default value is "dropOnShortening"

      See Also:
    • requestTemplate

      public String requestTemplate
      Applies to RestConnector dataSources (serverType "rest") only, and is overridable per operationBinding (see OperationBinding.requestTemplate).

      If you have a requestFormat of "json" or "xml", RestConnector will generate a block of JSON or XML, as appropriate, from your criteria or values, as described in the RESTRequestFormat docs. You can also specify a requestTemplate, and RestConnector will use the template to drive the generation of the JSON or XML, so you can use the full power of the Velocity Templating Language to perform mappings and other transformations, all declaratively. The process involves the following steps:

      1. We parse the template as JSON or XML, as appropriate, and convert it into an internal object structure
      2. Velocity templating is now performed on this structure, with the DSRequest values and criteria as context variables
      3. We serialize the result of this process back to JSON or XML
      Note, if you provide a requestTemplate, it serves as the basis for the entire JSON or XML structure sent to the REST server, so you can include, omit, rename or transform values from the client request as required. Also note, if the dsRequest has multiple valueSets (records), we will apply the template to each valueSet in turn, to construct a list for sending to the remote REST service (but note, this only works if the requestFormat is "json").

      Example templates

      Templates can make use of all of the conditional and iteration features of the Velocity Template Language, so can be used to perform some fairly sophisticated transformations. Consider a case where we have an update request with data like this sent from the client:
        {
             pk: 1234,
             foo: 27,
             bar: true,
             abc: "String Value"
        }
      This record contains all the information you need to send an update request to the remote REST server, but that server requires things in a different format, with different field names, different representations of boolean values, and an additional field whose value is derived. The record the REST server expects looks like this:
        {
             id: 1234, // pk from our client record
             FOO: 27,  // foo from our client record
             BAR: 1,  // bar from our client record, true == 1, false == 0
             def: "String value"  // abc from our client record
             isTBD: 0  // should be 1 if "def" == "TBD", otherwise 0
        }
      The following template would declaratively handle the necessary mappings from one format to the other:
        <requestTemplate>
        { 
             "id": $values.pk,
             "FOO": $values.foo,
             "BAR": #if($values.bar) 1 #else 0 #end,
             "def": "$values.abc",
             "isTBD": #if($values.abc == "TBD") 1 #else 0 #end
        }
        </requestTemplate>
      And the same thing in XML (this may require setting an xmlTag and/or providing additional levels of containment to match the target server's structure - XML often requires more boilerplate than JSON):
        <requestTemplate>
          <record>
             <id>$values.pk</id>
             <FOO>$values.foo</FOO>
             <BAR>#if($values.bar)1#else0#end</BAR>
             <def>$values.abc</def>
             <isTBD>#if($values.abc == "TBD")1#else0#end</isTBD>
          </record>
        </requestTemplate>
      Templates can also be used to flatten structured data. Say we have this incoming record:
        {
             customerId: 2770,
             name: "Snape Networks Inc",
             contacts: [
                 {name: "Isaura Lopez", title: "IT Manager"},
                 {name: "Ngozi Okoduwa", title: "VP Procurement"},
                 {name: "Andrew Jenkins", title: "Buyer"}
             ]
        }
      The target REST server does not support a list of contact details, just two contact names:
        {
             Id: 2770, 
             Name: "Snape Networks Inc",  
             Contact1: "Isaura Lopez",
             Contact2: "Ngozi Okoduwa"
        }
      This transformation could be achieved with the following template:
        <requestTemplate>
        { 
             Id: $values.customerId,
             Name: "$values.name",
             #foreach ($contact in $values.contacts)
               #if ($foreach.count > 2) #break  #end
             Contact$foreach.count: "$contact.name"
             #end
        }
        </requestTemplate>
      See the Velocity User Guide for full details of Velocity's templating features

      Default value is null

      See Also:
    • logSlowAdd

      public Integer logSlowAdd
      Allows you to specify "add" operation SQL query execution time threshold in milliseconds, which if exceeded query is identified as "slow" and may be logged under specific logging category.

      See logSlowSQL for more details.

      Default value is null

    • callbackParam

      public String callbackParam
      Applies only to dataFormat: "json" and dataTransport:"scriptInclude". Specifies the name of the query parameter that tells your JSON service what function to call as part of the response.

      Default value is "callback"

      See Also:
    • auditTimeStampFieldName

      public String auditTimeStampFieldName
      For DataSources with auditing enabled, specifies the field name used to store the timestamp when the operation was performed (in a field of type "datetime"). If empty string is specified as the field name, the audit DataSource will not store this field.

      Default value is "audit_changeTime"

    • mockDataCriteria

      public Criteria mockDataCriteria
      When mockMode is enabled, criteria to use in an initial "fetch" DSRequest to retrieve sample data.

      Default value is null

    • progressiveLoadingThreshold

      public int progressiveLoadingThreshold
      Indicates the dataset size that will cause Smart GWT Server to automatically switch into progressive loading mode for this DataSource. To prevent automatic switching to progressive loading, set this property to -1. This may also be prevented on a per-request basis by setting DSRequest.progressiveLoading to false.

      Default value is 200000

      See Also:
    • showLocalFieldsOnly

      public Boolean showLocalFieldsOnly
      For a DataSource that inherits fields from another DataSource (via inheritsFrom), indicates that only the fields listed in this DataSource should be shown. All other inherited parent fields will be marked "hidden:true".

      Default value is null

    • serviceNamespace

      public String serviceNamespace
      For an XML DataSource, URN of the WebService to use to invoke operations. This URN comes from the "targetNamespace" attribute of the <wsdl:definitions> element in a WSDL (Web Service Description Language) document, and serves as the unique identifier of the service.

      Having loaded a WebService using XMLTools.loadWSDL(), setting serviceNamespace combined with specifying operationBindings that set OperationBinding.wsOperation will cause a DataSource to invoke web service operations to fulfill DataSource requests (DSRequests).

      Setting serviceNamespace also defaults dataURL to the service's location, dataFormat to "xml" and dataProtocol to "soap".

      Default value is null

      See Also:
    • fileVersionField

      public String fileVersionField
      The native field name used by this DataSource on the server to represent fileVersion for FileSource Operations.

      Automatic file versioning is configured by the presence of this property: if you want automatic versioning for a FileSource DataSource, it is sufficient to simply define a fileVersionField. When automatic versioning is on:

      • Calls to saveFile() will save a new version of the file, retaining previous versions up to the maximum configured by maxFileVersions; when that maximum is reached, the oldest version is overwritten
      • The getFile() API always returns the most recent version
      • The listFiles() API only includes the most recent version of any file
      • You can view and retrieve earlier versions of a file with the listFileVersions() and getFileVersion() APIs. Note that retrieving a previous version of a file and then calling saveFile() goes through the normal process of saving a new version

      The fileVersion field is expected to be of type "datetime", and automatic versioning will not work otherwise. Note, to minimize the possibility of version timestamp collisions, we recommend that fileVersion fields specify storeMilliseconds: true.

      If the fileVersionField is not configured, no automatic file versioning will be done.
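
      Putting this together, a versioned FileSource DataSource might include settings and fields like these (a sketch; the DataSource ID and field names are illustrative):

        <DataSource ID="documentLibrary" serverType="sql"
                    fileNameField="fileName"
                    fileVersionField="fileVersion"
                    maxFileVersions="10">
            <fields>
                <field name="fileName"     type="text"     primaryKey="true"/>
                <field name="fileVersion"  type="datetime" primaryKey="true" storeMilliseconds="true"/>
                <field name="fileContents" type="binary"/>
            </fields>
        </DataSource>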

      Default value is null

      See Also:
    • unionFields

      public String unionFields
      Only applicable to "union" dataSources, this is a comma-separated list of the names of the dataSource fields that make up the union. This property is optional; if it is not supplied, Smart GWT Server will derive a list of fields from the member dataSources (see unionOf), where name and data type match. See defaultUnionFieldsStrategy for more details.

      Note, this setting is only useful for fields that are named the same on the member dataSources. For a more flexible and powerful way to define the unioned fields, that does not rely on field names being the same in member dataSources, you can use field-level unionOf definitions.
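
      A hedged sketch of a union DataSource restricted to two shared fields (the member DataSource IDs, field names, and the comma-separated unionOf form are illustrative assumptions):

        <DataSource ID="allContacts" serverType="union"
                    unionOf="customers,suppliers"
                    unionFields="name,email"/>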

      Default value is null

      See Also:
    • allowTemplateReferences

      public DataSourceTemplateReferenceMode allowTemplateReferences
      Indicates the mode to use for resolving templated references in this DataSource's configuration file. A "templated reference" is a reference that makes use of a special token to indicate it is a placeholder that should be replaced, during DataSource initialization, with a value derived from the server.properties file (in "all" mode, other relatively static context available to the Velocity engine can also be used). You can place a templated reference inside any string property, anywhere in your .ds.xml file.

      Templated references are a simple, lightweight way to introduce a level of dynamism to DataSource configuration. For example, you could use a templated reference to dynamically set the dbName.

      If you need more extensive support for dynamic DataSources than can be provided with templated references, you can completely customize configuration on-the-fly using a Dynamic DataSource Generator - see the server-side API com.isomorphic.datasource.DataSource.addDynamicDSGenerator()

      The default value for this property is generally "configOnly" - any DataSource that extends BasicDataSource, including the built-in SQL, Hibernate and JPA DataSource implementations, will default to this setting. You can override this across the board by adding a setting like this to your server.properties file:

           dataSource.default.allowTemplateReferences: none
           # or dataSource.default.allowTemplateReferences: all
        
      For RestConnector DataSources (serverTypes "rest" and "odata"), the default value for this property is "all", in keeping with that DataSource's extensive support for Velocity templating.

      Default value is See Below

      See Also:
    • noNullUpdates

      public boolean noNullUpdates
      When true, indicates that fields in this DataSource will never be positively updated to the null value; they may arrive at null values by being omitted, but we will not send actual null values in update requests. When false (the default), null is not treated in any special way.

      Setting this value causes null-assigned fields to be replaced with the field's nullReplacementValue, if one is declared. If no nullReplacementValue is declared for the field, the null assignment is replaced with the DataSource's nullStringValue, nullIntegerValue, nullFloatValue or nullDateValue, depending on the field type.

      For "add" operations, setting omitNullDefaultsOnAdd causes null-valued fields to be removed from the request entirely, rather than replaced with default values as described above.

      Default value is false

    • transformMultipleFields

      public Boolean transformMultipleFields
      If set to "false", transformation of values for multiple:true fields, normally controlled by DataSourceField.multipleStorage, is instead disabled for this entire DataSource.

      Default value is null

    • enumConstantProperty

      public String enumConstantProperty
      The name of the property this DataSource uses for the constant name when translating Java enumerated types to and from JavaScript, if the EnumTranslateStrategy is set to "bean". Defaults to "_constant" if not set.

      This property is only applicable if you are using the Smart GWT server.

      Default value is null

    • transformRawResponseScript

      public String transformRawResponseScript
      Applicable to server-side REST DataSources only

      A scriptlet to be executed on the server after data has been fetched from the REST service, but before it is processed through templating. The intention is that this scriptlet transforms the response data in some way, before that transformed data is passed through templating and further downstream transformation steps such as record transformation. If your use case does not involve templating, there is no practical difference between putting your transformation logic in this script or in a transformResponseScript - they are simply the pre-templating and post-templating transformation opportunities. Accordingly, exactly the same variables are available to the transformRawResponseScript as to the transformResponseScript - see that property's documentation for details.

      Note, if you prefer a Java solution rather than placing scripts in your .ds.xml files, you can instead extend the Java RestConnector class and override its transformRawResponse() method. If you both override the Java method and provide a transformRawResponseScript, the Java method runs first and any transformations it makes will be visible to the script. See serverConstructor for details of how to use a custom class to implement a DataSource server-side.

      Default value is null

      See Also:
    • recordName

      public String recordName
      Provides a default value for OperationBinding.recordName.

      Default value is null

      See Also:
    • clientRequestMaxRows

      public int clientRequestMaxRows
      Applies to SQL DataSources only.

      If this attribute is set to a non-negative value, it acts as a hard safety limit for client-initiated DSRequests for "all rows". If the server encounters more rows in a response than this safety limit, it will abort immediately with an Exception.

      This attribute is not meant to be a regular application facility. As mentioned above, it is a safety mechanism, intended primarily to prevent bugs in development from causing long delays or server Out Of Memory crashes by unintentionally issuing requests that fetch huge numbers of rows (eg, by failing to specify filter criteria).

      Note the following:

      • This limit only has an effect when "all rows" are requested - ie, when the request does not specify an endRow, or specifies endRow:-1. If you specify a non-negative endRow, it will be honored even if that means we need to return more than clientRequestMaxRows records
      • If you need to handle very large datasets in a manageable way, consider using ProgressiveLoading to stream the data progressively
      • Note that this attribute applies to client-initiated requests only. If you want to provide a hard safety limit for all fetches, including requests initiated on the server, set a requestMaxRows instead. If both properties are specified, clientRequestMaxRows wins for client-initiated requests, so it is possible to configure different limits for client- and server-initiated requests

      To set a default clientRequestMaxRows that will apply to all dataSources that do not specify the attribute, add the following to your server.properties file:

            # Fail with an error if we try to fetch more than 10000 rows in a client request
            sql.clientRequestMaxRows: 10000
        

      Default value is -1

      See Also:
    • dataTransport

      public RPCTransport dataTransport
      Transport to use for all operations on this DataSource. Defaults to defaultTransport. This would typically only be set to enable "scriptInclude" transport for contacting JSON web services hosted on servers other than the origin server.

      When using the "scriptInclude" transport, be sure to set callbackParam or OperationBinding.callbackParam to match the name of the query parameter name expected by your JSON service provider.

      Default value is RPCManager.defaultTransport

      See Also:
    • dropExtraFields

      public Boolean dropExtraFields
      Indicates that for server responses, for any data being interpreted as DataSource records, only data that corresponds to declared fields should be retained; any extra fields should be discarded.

      For JSON data, this means extra properties in selected objects are dropped.

      By default, for DMI DSResponses, DSResponse.data is filtered on the server to just the set of fields defined on the DataSource (see the overview in DMI).

      This type of filtering can also be enabled for non-DMI DSResponses. By default it is enabled for Hibernate and JPA dataSources, to avoid unintentionally lazy-loading too much of the data model; for all other dataSource types it is disabled by default.

      Explicitly setting this property to false disables (or to true enables) this filtering for this DataSource only. This setting overrides the configuration in server.properties. This setting can be overridden by ServerObject.dropExtraFields.

      Default value is null

      See Also:
    • dbName

      public String dbName
      For DataSources using the Smart GWT SQL engine for persistence, which database configuration to use. Database configurations can be created using the Admin Console. If unset, the default database configuration is used (which is also settable using the "Databases" tab).

      Default value is null

      See Also:
    • quoteTableName

      public Boolean quoteTableName
      For SQL DataSources, tells the framework whether to surround the associated table name with quotation marks whenever it appears in generated queries. This is only required if you have to connect to a table with a name that is in breach of your database product's naming conventions. For example, some products (eg, Oracle) internally convert all unquoted references to upper case, so if you create a table called myTest, the database actually calls it MYTEST unless you quoted the name in the create command, like this:

        CREATE TABLE "myTest"

      If you do quote the name like this, or if you have to connect to a legacy table that has been named in this way, then you must set this property to tell the SQL engine that it must quote any references to this table name (this requirement depends on the database in use - as noted below, some are not affected by this problem). If you do not, you will see exceptions along the lines of "Table or view 'myTest' does not exist".

      Note, other database products (eg, Postgres) convert unquoted names to lower case, which leads to the same issues. Still others (eg, SQL Server) are not case sensitive and are not affected by this issue.

      Generally, we recommend that you avoid using this property unless you have a specific reason to do so. It is preferable to avoid the issue altogether by simply not quoting table names at creation time, if you are able to do so.
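
      If you do need it, the setting is declared directly on the DataSource element. A minimal sketch for the quoted "myTest" table described above (the DataSource ID is illustrative):

        <DataSource ID="myTest" serverType="sql" tableName="myTest" quoteTableName="true">
          ...
        </DataSource>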

      Default value is null

      See Also:
    • fileLastModifiedField

      public String fileLastModifiedField
      The native field name used by this DataSource on the server to represent fileLastModified for FileSource Operations.

      If the fileLastModifiedField is not configured, then a field named "fileLastModified" will be used, if it exists. Otherwise, the server will look for a field with a "modifierTimestamp" type. If that is not found, the DataSource will not keep track of the last modified date.

      Default value is null

      See Also:
    • descriptionField

      public String descriptionField
      Name of the field that has a long description of the record, or has the primary text data value for a record that represents an email message, SMS, log or similar.

      For example, for a DataSource representing employees, a field containing the employee's "bio" might be a good choice, or for an email message, the message body.

      If descriptionField is unset, it defaults to any field named "description" or "desc" in the record, or the first long text field (greater than 255 characters) in the record, or null if no such field exists.
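
      For instance, a hypothetical email-message DataSource might designate the message body explicitly (field and ID names are illustrative):

           <DataSource ID="emailMessage" descriptionField="messageBody">
             <fields>
               <field name="subject" type="text" />
               <field name="messageBody" type="text" length="32000" />
             </fields>
           </DataSource>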

      Default value is null

      See Also:
    • serverObject

      public ServerObject serverObject
      For Direct Method Invocation (DMI) binding, declares the ServerObject to use as the default target for all operationBindings. Specifying this attribute in an XML DataSource stored on the server enables DMI for this DataSource.

      Note that if a dataSource configuration has both a <script> block and a specified serverObject for some operation, the script block will be executed, and the serverObject ignored.
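
      A minimal sketch of enabling DMI in a descriptor (the class name is hypothetical):

           <DataSource ID="order" serverType="sql">
             <serverObject lookupStyle="new" className="com.example.OrderDMI" />
             ...
           </DataSource>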

      Default value is null

      See Also:
    • logSlowCustom

      public Integer logSlowCustom
      Allows you to specify an SQL query execution time threshold, in milliseconds, for "custom" operations; if a query exceeds this threshold it is identified as "slow" and may be logged under a specific logging category.

      See logSlowSQL for more details.

      Default value is null

    • autoDeriveTitles

      public boolean autoDeriveTitles
      If set, titles are automatically derived from field.name for any field that does not have a field.title and is not marked hidden:true, by calling the method getAutoTitle().

      Default value is true

    • sparseUpdates

      public boolean sparseUpdates
      When true, indicates that any updates for this DataSource include only those fields that have actually changed (and primaryKey fields); when false (the default), all field values are included in updates, whether they have changed or not.

      Default value is false

    • strictSQLFiltering

      public Boolean strictSQLFiltering
      If set to true, both client and server-side advanced filtering used by Smart GWT will follow SQL99 behavior for dealing with NULL values, which is often counter-intuitive to users. Specifically, when a field has NULL value, all of the following expressions are false:
           field == "someValue"  (normally false)
           field != "someValue"  (normally true)
           not (field == "someValue")   (normally true)
           not (field != "someValue")   (normally false)
        
      This property can be overridden per-query by specifying strictSQLFiltering directly as a property on the AdvancedCriteria.

      NOTE: On the server side, this property is only applicable if you are using the SQL DataSource; the other built-in types (Hibernate and JPA/JPA2) do not offer this mode.

      Default value is false

    • progressiveLoading

      public Boolean progressiveLoading
      If true, causes Smart GWT Server to use the "progressive loading" pattern for fetches on this dataSource, as described in the Paging and total dataset length section of the ResultSet documentation. Essentially, this means that we avoid issuing a row count query and instead advertise total rows as being slightly more than the number of rows we have already read (see endGap). This allows users to load more of a dataset by scrolling past the end of the currently-loaded rows, but it prevents them from scrolling directly to the end of the dataset.

      Generally, progressive loading is appropriate when you have to deal with very large datasets. Note that by default, a dataSource will switch into progressive loading mode automatically when it detects that it is dealing with a dataset beyond a certain size - see progressiveLoadingThreshold.

      This setting can be overridden for individual fetch operations with the OperationBinding.progressiveLoading property, and also at the level of the individual DSRequest. You can also specify progressiveLoading on DataBoundComponents and certain types of FormItem - SelectItem and ComboBoxItem.

      Currently, this property only applies to users of the built-in SQLDataSource, but you could use it in custom DataSource implementations to trigger the server behavior described in the ResultSet documentation linked to above.

      Default value is null

      See Also:
    • configBean

      public String configBean
      For DataSources of serverType "hibernate", the name of a Spring bean to query to obtain Hibernate Configuration for this particular DataSource. Note that this is intended for DataSource-specific configuration overrides for unusual circumstances, such as a DataSource whose physical data store is a completely different database to that used by other DataSources. See the Integration with Hibernate article for more information.

      Default value is null

      See Also:
    • schemaBean

      public String schemaBean
      For DataSources that specify autoDeriveSchema, this property indicates the name of the Spring bean, Hibernate mapping or fully-qualified Java class to use as parent schema.

      Default value is null

      See Also:
    • auditDSConstructor

      public String auditDSConstructor
      For DataSources with auditing enabled, optionally specifies the serverConstructor for the automatically generated audit DataSource. The default is to use the same serverConstructor as the DataSource where audit="true" was declared.

      This property is primarily intended to allow the use of SQLDataSource (serverType:"sql") as an audit DataSource for a DataSource that might be of another type. For example, you might have a DataSource that implements all CRUD operations via Java logic in DMI declaration methods, and so doesn't provide generic storage; by using SQLDataSource as the type of your audit DataSource, you don't need to implement your own scheme for storing and querying audit data, and the necessary audit tables will be automatically generated in the database.

      Similarly, even if you do use a reusable DataSource type such as the built-in JPADataSource, using SQLDataSource for audit DataSources means there's no need to write a JPA bean just to achieve storage of an audit trail.

      To simplify this intended usage, the string "sql" is allowed for auditDSConstructor as a means of specifying that the built-in SQLDataSource class should be used. For any other type, use the fully qualified Java classname, as is normal for serverConstructor.
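
      For example, a JPA DataSource that stores its audit trail via the built-in SQLDataSource might declare the following (the DataSource ID is illustrative):

           <DataSource ID="order" serverType="jpa" audit="true" auditDSConstructor="sql">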

      Default value is null

    • cacheSyncTiming

      public CacheSyncTiming cacheSyncTiming
      The cacheSyncTiming to use for operations on this DataSource. Can be overridden at the operation and DSRequest levels. The default value of null is the same as specifying "immediate".

      Note that cacheSyncTiming is not applicable to many common types of request, and is simply ignored for these request types. See the CacheSyncTiming type documentation for more details.

      Default value is null

      See Also:
    • templateConfigToken

      public String templateConfigToken
      The special token to look for when trying to resolve template references when the template references mode is "configOnly". In this mode, the system looks for this token, followed by the strict pattern ["property_name"] (single quotes are allowed instead of double quotes, but that is the only supported variation).

      By default the template token is "$config" because that is the token used for Velocity templating, and is thus familiar. If you need to use a different token - perhaps because the "$" symbol already has special significance in your system - you can do so by setting this property. For example, if you set templateConfigToken="#conf", your templated references would look like this:

           someProperty="#conf['my.property.name']"
        

      Default value is "$config"

      See Also:
    • enumOrdinalProperty

      public String enumOrdinalProperty
      The name of the property this DataSource uses for ordinal number when translating Java enumerated types to and from Javascript, if the EnumTranslateStrategy is set to "bean". Defaults to "_ordinal" if not set.

      This property is only applicable if you are using the Smart GWT server.

      Default value is null

    • csvDelimiter

      public String csvDelimiter
      Applies to RestConnector dataSources (serverType "rest") only, and is overridable per operationBinding (see OperationBinding.csvDelimiter).

      This property specifies the character to recognise as the delimiter when parsing REST service responses where the responseFormat is "csv".
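
      For example, a hypothetical DataSource consuming a pipe-delimited REST feed might declare:

           <DataSource ID="pipeFeed" serverType="rest" csvDelimiter="|">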

      Default value is ","

      See Also:
    • useAnsiJoins

      public Boolean useAnsiJoins
      For DataSources using the Smart GWT SQL engine for persistence, whether to use ANSI-style joins (ie, joins implemented with JOIN directives in the table clause, as opposed to additional join expressions in the where clause). The default value of null has the same meaning as setting this flag to false.

      Note, outer joins (see joinType) do not work with all supported database products unless you use ANSI joins. Other than that, the join strategies are equivalent.

      If you wish to switch on ANSI-style joins for every DataSource, without the need to manually set this property on all of them, set server.properties flag sql.useAnsiJoins to true.
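
      For example:

           # Use ANSI-style joins for all SQL DataSources
           sql.useAnsiJoins: true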

      Default value is null

      See Also:
    • requiresCompleteRESTResponse

      public Boolean requiresCompleteRESTResponse
      Applies to RestConnector dataSources (serverType "rest") only, and is overridable per operationBinding (see OperationBinding.requiresCompleteRESTResponse).

      If true, this flag indicates that the DataSource requires access to the entire REST response, not just the data part configured by the recordXPath. This is often the case for REST services that send back useful metadata alongside the data itself. For example, some REST services limit the number of records that can be retrieved in one service call, and will return the total number of records actually available in some kind of metadata property.

      This is useful information that a DataSource can use in custom logic to return a richer response to the client. This custom logic can be written in Java in a DMI method or custom DataSource implementation, or in any JSR223 scripting language, as a server script or response transformation script.

      When this flag is true, the complete REST response is available to your server-side custom code as follows:

      • For a DMI or custom dataSource implementation that invokes the RestConnector built-in logic by calling execute() on either the DSRequest or the DataSource itself (this would be a super() call for a custom impl), the DSResponse returned to your code will contain a property called "completeData"
      • For a <script> or <transformResponseScript>, a variable called "completeData" will be available to your script (completeData is also available as dsResponse.completeData)
      Note, for requests that came originally from a remote client, the "completeData" property of the DSResponse will be serialized and returned to the client. If you do not want this, your custom server code should remove that property from the DSResponse when it has finished with it.

      Default value is false

      See Also:
    • showPrompt

      public Boolean showPrompt
      Whether RPCRequests sent by this DataSource should enable RPCRequest.showPrompt in order to block user interactions until the request completes.

      DataSource requests default to blocking UI interaction because, very often, if the user continues to interact with an application that is waiting for a server response, some kind of invalid or ambiguous situation can arise.

      Examples include pressing a "Save" button a second time before the first save completes, making further edits while edits are still being saved, or trying to initiate editing on a grid that hasn't loaded data.

      Defaulting to blocking the UI prevents these bad interactions, or alternatively, avoids the developer having to write repetitive code to block invalid interactions on every screen.

      If an operation should ever be non-blocking, methods that initiate DataSource requests (such as fetchData()) will generally have a requestProperties argument allowing showPrompt to be set to false for a specific request.

      Default value is true

    • defaultTextMatchStyle

      public TextMatchStyle defaultTextMatchStyle
      The default textMatchStyle to use for DSRequests that do not explicitly state a textMatchStyle. Note, however, that DSRequests issued by ListGrids and other components will generally have a setting for textMatchStyle on the component itself (see ListGrid.autoFetchTextMatchStyle, for example).

      Default value is "exact"

      See Also:
    • mockDataRows

      public Integer mockDataRows
      When mockMode is enabled, number of rows of data to retrieve via an initial "fetch" DSRequest, for use as sample data. Set to null to retrieve all available rows.

      Default value is 75

    • autoConvertRelativeDates

      public Boolean autoConvertRelativeDates
      Whether to convert relative date values to concrete date values before sending to the server. Default value is true, which means that the server does not need to understand how to filter using relative dates - it receives all date values as absolute dates.

      If the server does receive relative date values from the client, by default they are left unchanged for DMI and converted automatically during request execution. This may be changed via the server.properties setting datasources.autoConvertRelativeDates, which can be set to the following values:

      • postDMI - the default value described above
      • preDMI - relative date values will be converted to absolute date values right away, so they will be already converted in DMI
      • disabled - relative date values will not be automatically converted, so it must be done completely manually or by calling the DSRequest.convertRelativeDates() server-side API.
      Normally there is no need to convert relative dates on the server, this is done by default on the client before the request is sent to the server. The primary purpose for converting relative dates on the server is when there is a need to store and use relative dates at a later point such as in an automated job without any involvement from the client. See more details in the javadoc for DataSource.convertRelativeDates(Criterion) server-side API.
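
      If you do need server-side conversion, an illustrative server.properties entry using one of the values listed above would be:

           # Convert relative date values before DMI methods run
           datasources.autoConvertRelativeDates: preDMI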

      Default value is true

    • title

      public String title
      User-visible name for this DataSource.

      For example, for the supplyItem DataSource, "Supply Item".

      If unset, the getAutoTitle() method will be used with the dataSource.ID value in order to derive a default value for the title.

      For example, an ID of "employee" derives the title "Employee", and an ID of "team_member" derives "Team Member".

      Default value is dataSource.ID

    • globalNamespaces

      public Map globalNamespaces
      Namespaces definitions to add to the root element of outbound XML messages sent to a web service, as a mapping from namespace prefix to namespace URI.

      The default value is:

          globalNamespaces : {
             xsi: "http://www.w3.org/2001/XMLSchema-instance",
             xsd: "http://www.w3.org/2001/XMLSchema"
          },
        
      This default value allows the use of the xsi:type and xsi:nil attributes without further declarations.

      Note that some web services will only accept specific revisions of the XML Schema URI. If xsi-namespaced attributes seem to be ignored by an older webservice, try the URI "http://www.w3.org/1999/XMLSchema-instance" instead.

      Default value is ...

    • sendExtraFields

      public Boolean sendExtraFields
      Analogous to dropExtraFields, for data sent to the server. Setting this attribute to false ensures that for any records in the data object, only fields that correspond to declared dataSource fields will be present on the dsRequest data object passed to transformRequest() and ultimately sent to the server.

      Default value is true

      See Also:
    • implicitCriteria

      public Criteria implicitCriteria
      Criteria that are implicitly applied to all fetches made through this DataSource. These criteria are never shown to or edited by the user and are cumulative with any other criteria provided on the DSRequest.

      For example, a DataSource might *always* implicitly limit its fetch results to records owned by the current user's department. Components and ResultSets requesting data from the DataSource can then apply further implicitCriteria of their own, separately from their regular, user-editable criteria.

      For instance, a grid bound to this dataSource might be further limited to implicitly show only the subset of records created by the current user. See DataBoundComponent.implicitCriteria and ResultSet.implicitCriteria for more on these localized options.

      Note that, while implicitCriteria can be declared in a server DataSource file using Component XML, it is an entirely client-side feature, added by client-side components. So it does not affect server-side requests, and will not be added to client-side requests that don't come from a Smart GWT UI (eg RestHandler).

      Default value is null

    • pluralTitle

      public String pluralTitle
      User-visible plural name for this DataSource.

      For example, for the supplyItem DataSource, "Supply Items".

      Defaults to dataSource.title + "s".

      Default value is dataSource.ID

    • tableName

      public String tableName
      For DataSources using the Smart GWT SQL engine for persistence, what database table name to use. The default is to use the DataSource ID as the table name.

      Default value is null

      See Also:
    • unionOf

      public String unionOf
      Only applicable to "union" dataSources, this is a comma-separated list of the names of the member dataSources that make up the union. If all the named dataSources are SQL DataSources, or union dataSources whose members are SQL dataSources, the union will be implemented purely in SQL, which is the most efficient thing to do.
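
      A minimal sketch of a union descriptor (the DataSource IDs are illustrative):

           <DataSource ID="allContacts" serverType="union" unionOf="employees,contractors">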

      Nesting Union DataSources
      It is valid for union DataSources to be nested inside other union DataSources; nesting is valid to any depth. If you are nesting union dataSources, bear in mind the following considerations:

      • If you are renaming fields from underlying SQL DataSources in a union DataSource, and you then include that union DataSource as a nested element of another union DataSource, it is the renamed fields that should be referenced in the outer union DataSource. For example, if you have union DataSource "union1" that specifies a unioned field like this:
             <field name="c" unionOf="sqlDS1.a, sqlDS2.b" />
        If you want to then include dataSource "union1" as a member dataSource of another union dataSource, "union2", you need to refer to the field as "c", not "a" or "b":
             <field name="myField" unionOf="union1.c, sqlDS3.xyz" />
        Smart GWT will follow the chain of field renames to ensure that the appropriate SQL-level fields are unioned together, and renamed consistently
      • It is not an error to specify the same dataSource as a member of a union dataSource multiple times, but it is also not correct and may have performance implications. If you configure duplicate member dataSources directly, eg:
             <DataSource serverType="union" unionOf="sqlDS1,sqlDS2,sqlDS1">
        Smart GWT will simply remove the duplicate entries. However, with nested union dataSources, it becomes possible to indirectly include the same member dataSource more than once. Eg:
             <DataSource ID="bigUnion" serverType="union" unionOf="union1,union2">
        and
             <DataSource ID="union1" serverType="union" unionOf="sqlDS1,sqlDS2">
        and
             <DataSource ID="union2" serverType="union" unionOf="sqlDS2,sqlDS3">
        As you can see, dataSource "sqlDS2" is going to be included twice. This will result in the union DataSource generating SQL that fetches the rows from "sqlDS2" twice, but because of the way the SQL "UNION" keyword works, all the duplicate rows will be dropped. So the end result will be correct, but the system will have wasted time fetching and then dropping duplicate rows.
      • The ability to nest union dataSources within other union dataSources introduces a whole category of extra considerations. For example, you need to make sure you do not set up self references or looping structures, and as mentioned above, field renaming becomes potentially more involved

      Default value is null

      See Also:
    • recordXPath

      public XPathExpression recordXPath
      See OperationBinding.recordXPath. recordXPath can be specified directly on the DataSource for a simple read-only DataSource only capable of "fetch" operations, on clientOnly DataSources using String, or on RestConnector dataSources.

      Default value is null

      See Also:
    • useSubselectForRowCount

      public Boolean useSubselectForRowCount
      This property is only applicable to SQL DataSources, and only for operations that express a customSQL clause. In these circumstances, we generally switch off paging because we are unable to generate a "row count" query that tells the framework the size of the complete, unpaged resultset.

      The useSubselectForRowCount flag causes the framework to derive a rowcount query by simply wrapping the entire customSQL clause in a subselect, like so:
          SELECT COUNT(*) FROM ({customSQL clause here})

      However, this is not guaranteed to give good results. Because the customSQL clause can contain anything that you can write in SQL, running it inside a subselect in order to count the rows might not work, might have unintended side-effects (if, for example, it is a stored procedure call that performs updates as part of its function), or it might just be a bad idea - for example, if the customSQL query is slow-running, you'll make it twice as slow with this flag, simply because you'll be running it twice. We recommend using this flag with care.

      NOTE: This setting can be overridden per-operation - see OperationBinding.useSubselectForRowCount. You can also set a global default for this setting, so you don't have to specify it in every dataSource - define useSubselectForRowCount as true in your server.properties file.

      Default value is null

    • fileContentsField

      public String fileContentsField
      The native field name used by this DataSource on the server to represent the fileContents for FileSource Operations.

      If the fileContentsField is not configured, then a field named "fileContents" or "contents" will be used, if it exists. If not found, the longest text field which is not the fileNameField, fileTypeField or fileFormatField will be used.

      Note that the only method which will actually return the fileContents is getFile() -- the other FileSource methods omit the fileContents for the sake of efficiency.

      Default value is null

      See Also:
    • serverType

      public DSServerType serverType
      For a DataSource stored in .xml format on the Smart GWT server, indicates what server-side connector to use to execute requests, that is, what happens if you call dsRequest.execute() in server code.

      Default value is "generic"

      See Also:
    • requestMaxRows

      public int requestMaxRows
      Applies to SQL DataSources only.

      The same as clientRequestMaxRows, but applies to all requests, not just client-initiated ones. See the documentation for clientRequestMaxRows for details. To reiterate the warning given for that property, this is a safety limit that results in an Exception being thrown: it is not intended to be used as a regular application facility.

      To set a default requestMaxRows that will apply to all dataSources that do not specify the attribute, add the following to your server.properties file:

            # Fail with an error if we try to fetch more than 50000 rows in any request
            sql.requestMaxRows: 50000
        

      Default value is -1

      See Also:
    • requestFormat

      public RESTRequestFormat requestFormat
      For a RestConnector DataSource, the request format to use when contacting the remote REST service. Can be overridden at operationBinding level, see OperationBinding.requestFormat. Note, if requestFormat is not specified at either the DataSource or OperationBinding level, the request will be rejected.

      Default value is null

      See Also:
    • preventHTTPCaching

      public Boolean preventHTTPCaching
      If set, the DataSource will ensure that it never uses a cached HTTP response, even if the server marks the response as cacheable.

      Note that this does not disable caching at higher levels in the framework, for example, the caching performed by ResultSet.

      Default value is true

    • canMultiSort

      public boolean canMultiSort
      When true, indicates that this DataSource supports multi-level sorting.

      Default value is true

    • requiresAuthentication

      public Boolean requiresAuthentication
      Whether a user must be authenticated in order to access this DataSource. This establishes a default for the DataSource as a whole; individual operationBindings within the DataSource may still override this setting by explicitly setting OperationBinding.requiresAuthentication.

      Whether the user is authenticated is determined by calling httpServletRequest.getRemoteUser(), hence works with both simple J2EE security (realms and form-based authentication) and JAAS (Java Authentication & Authorization Service).

      If you wish to use an authentication scheme that does not make use of the servlet API's standards, Smart GWT Server also implements the setAuthenticated method on RPCManager. You can use this API to tell Smart GWT that all the requests in the queue currently being processed are associated with an authenticated user; in this case, Smart GWT will not attempt to authenticate the user via httpServletRequest.getRemoteUser()

      You can set the default value for this property via setting "authentication.defaultRequired" in server.properties. This allows you to, for example, cause all DataSources to require authentication for all operations by default.
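
      For example:

           # Require authentication for all DataSource operations by default
           authentication.defaultRequired: true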

      Note that setting this property does not automatically cause an authentication mechanism to appear - you still need to separately configure an authentication system. Likewise, setting requiresAuthentication="false" does not automatically allow users to bypass your authentication mechanism - you need to set up a URL that will accept DSRequests and process them similar to the default "IDACall" servlet, and which is not protected by the authentication system. See Deploying Smart GWT for details on the IDACall servlet.

      Default value is null

      See Also:
    • logSlowFetch

      public Integer logSlowFetch
      Allows you to specify an SQL query execution time threshold, in milliseconds, for "fetch" operations; if a query exceeds this threshold it is identified as "slow" and may be logged under a specific logging category.

      See logSlowSQL for more details.

      Default value is null

    • allowRelatedFieldsInCriteria

      public Boolean allowRelatedFieldsInCriteria
      This property indicates whether this DataSource supports references to related fields in criteria, either using qualified field names or subqueries. See the subquery filtering overview for details. Note, only AdvancedCriteria can contain related fields.

      All server-side DataSource implementations provide a level of support for related fields in criteria, but only SQL DataSources can handle cases that require conditions to be considered simultaneously with a join to the outer dataSource.

      Default value is null

      See Also:
    • cacheSyncStrategy

      public CacheSyncStrategy cacheSyncStrategy
      The cacheSyncStrategy to use for operations on this DataSource. Can be overridden at the operation and DSRequest levels.

      If cacheSyncStrategy is not explicitly set for a DataSource, we will use the default strategy for this DataSource type. The defaults, as shipped, are:

      Server Type    Strategy
      sql            requestValuesPlusSequences
      hibernate      refetch
      jpa/jpa1       refetch
      rest           responseValues
      generic        refetch

      You can override these default strategies by adding a "default.{server-type}.cache.sync.strategy" setting to your server.properties file, like these examples:

           default.sql.cache.sync.strategy: refetch
           default.rest.cache.sync.strategy: none
        
      Note that global cacheSyncStrategy settings are defaults only; Smart GWT Server will override them intelligently in some circumstances. See the "CacheSyncStrategy" section of the cache synchronization overview for details of when and why we do this.

      Default value is null

      See Also:
    • titleField

      public String titleField
      Best field to use for a user-visible title for an individual record from this dataSource.

      For example, for a DataSource of employees, a "full name" field would probably most clearly label an employee record.

      If not explicitly set, the titleField is determined by looking for fields named "name", "dataSourceIdName", "title", "dataSourceIdTitle", "label", "dataSourceIdLabel", "id" and "dataSourceIdId". For example, for a DataSource with ID "customer", a field called customerName would be found if there were no "name" field. Search is case insensitive, and an underscore is allowed after dataSourceId (so that, for example, "CUSTOMER_NAME" would also be found and preferred).

      For purposes of this search, any trailing numerals in the DataSource ID are discarded, so a DataSource with ID "office2" will search for title fields as if the ID were just "office".

      If no field is found that matches any of the names above, then the first field is designated as the titleField.

      Default value is see below

      See Also:
    • defaultUnionFieldsStrategy

      public UnionFieldsStrategy defaultUnionFieldsStrategy
      Only applicable to "union" dataSources. Determines the strategy we adopt when automatically deriving a list of fields when no explicit unionFields setting is provided. Note that explicit field declarations that include unionOf definitions take precedence over whatever defaultUnionFieldsStrategy is defined. As an example of how this property affects field derivation, assume our unionDataSource specifies three member dataSources in its unionOf setting:
      • ds1 has fields f1 (integer), f2 (text) and f3 (boolean)
      • ds2 has fields f1 (integer), f2 (text) and f4 (datetime)
      • ds3 has fields f1 (integer), f2 (float), f4 (datetime) and f5 (text)
      Given this, the different unionFieldsStrategy settings would derive the following fields:
      • intersect would give just f1 (f2 exists on every dataSource, but the data type is not the same in every case)
      • matching would derive f1 and f4 - again, f2 would be excluded because of the data type discrepancy. Records from ds1 would contribute a null value for f4
      • all would derive all fields except f2, excluded because of the data type discrepancy. Values for missing fields from any given dataSource would be null

      Default value is "matching"

      See Also:
    • multiInsertBatchSize

      public Integer multiInsertBatchSize
      For dataSources of serverType "sql" only, this property sets the multi-insert batch size for add requests on this dataSource. Only has an effect if the add request specifies a list of records as the data, and only if multiInsertStrategy is set to "multipleValues", either globally or at the DSRequest, OperationBinding, or DataSource level.

      The multi-insert batch size determines the maximum number of VALUES clauses Smart GWT Server will generate for a single INSERT operation. The optimum value for this setting depends on the underlying database, but typically larger batches yield better performance. If the number of records in the request exceeds the batch size, we break it up into however many batches are required. For example, if the batch size is 55 and the request contains 200 records, we would create 4 database INSERT operations - 3 with 55 VALUES clauses, and a final one with the remaining 35.

      Note that this setting (along with the other multi-insert properties, multiInsertStrategy and multiInsertNonMatchingStrategy) can be overridden at the operationBinding level and the dsRequest level. If you wish to configure multi-insert setting globally, see the documentation for multiInsertStrategy.
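
      For example, to use the batch size of 55 discussed above (the DataSource ID is illustrative):

           <DataSource ID="order" serverType="sql" multiInsertBatchSize="55">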

      Default value is null

      See Also:
    • allowAggregation

      public Boolean allowAggregation
      This property indicates whether this DataSource supports aggregation/summarization of field values using the summaryFunction mechanism. Of the built-in DataSource types, SQL, Hibernate and JPA DataSources allow aggregation, as do clientOnly DataSources.

      If you create a custom DataSource type that can handle the summary functions described in the overview linked above, you should set this flag true to indicate to Smart GWT that summary functions should be allowed - for example, in the FilterBuilder.

      Default value is null

      See Also:
    • csvQuoteCharacter

      public String csvQuoteCharacter
      Applies to RestConnector dataSources (serverType "rest") only, and is overridable per operationBinding (see OperationBinding.csvQuoteCharacter).

      This property specifies the character to treat as the string quotation mechanism when parsing REST service responses where the responseFormat is "csv". Text in CSV responses does not have to quote string data, but many uses of CSV do quote strings because it allows text in the data to contain occurrences of the csvDelimiter. For example, if the csvDelimiter is the default value of ",", quoting allows the following text, which would otherwise be ambiguous:

          1,ABC,"This is a text string, with a comma in the middle"

      Default value is '"'

      See Also:
    • creatorOverrides

      public Boolean creatorOverrides
      Indicates that declarative security rules are waived for rows that were created by the current user. Practically, this means that when a security check fails, instead of a security exception being thrown, we alter the criteria to ensure that the request can only return or affect rows that were created by the current authenticated user. This allows you to create security regimes whereby users can see and edit data they created, but have access to other users' data forbidden or limited.

      In order for this to work, we require two things:

      • The DataSource must specify a field of type "creator" - this field type is described on this page
      • The authentication regime in use must include the idea of a "current user". The authentication provided by the Servlet API is an example of such a regime.
      This setting can be overridden at operationBinding and field level, allowing extremely fine-grained control.
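
      A minimal sketch of a descriptor using this facility (field and DataSource names are illustrative):

           <DataSource ID="privateNote" serverType="sql" creatorOverrides="true">
             <fields>
               <field name="createdBy" type="creator" />
               ...
             </fields>
           </DataSource>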

      Default value is null

      See Also:
    • resultSetClass

      public Map resultSetClass
      Class for ResultSets used by this datasource. If null, defaults to using ResultSet.

      This can be set to a custom subclass of ResultSet that, for example, hangs onto extra information necessary for integration with web services.

      Default value is null

    • logSlowSQL

      public Integer logSlowSQL
      Allows you to specify an SQL query execution time threshold, in milliseconds; if a query exceeds this threshold it is identified as "slow" and may be logged under a specific logging category.

      This setting applies to all SQL queries, unless more specific thresholds are set using logSlowFetch, logSlowAdd, logSlowUpdate, logSlowRemove or logSlowCustom, or - more specifically still - on an individual operationBinding, configured in your DS XML as operationBinding.logSlowSQL.

      If none of these thresholds is set, the global sql.log.queriesSlowerThan setting in server.properties will be used.

      For details on how to set up the logging side, see the special logging category com.isomorphic.SLOW_SQL.
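
      For example, a possible global threshold in server.properties (the 2000ms value is purely illustrative):

           # Treat any SQL query taking longer than 2 seconds as slow
           sql.log.queriesSlowerThan: 2000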

      Default value is null

    • ownerIdNullAccess

      public NullAccessType ownerIdNullAccess
      If ownerIdField is in force, specifies the access that is allowed to records with a null ownerIdField. This property has no effect if ownerIdField is not specified.

      This property can be used in conjunction with ownerIdNullRole to create the concept of shared, or public, records. For example, if you set ownerIdNullRole to "administrator", any users with the "administrator" role will be allowed to write records with a null ownerIdField. If you also set ownerIdNullAccess to "view", all those records with a null owner will be viewable by all, in addition to their own records. We use this functionality with the Saved Searches feature, to enable precisely this: users can save their own searches, which are private to them, but administrators can also save searches with a null ownerIdField, which become standard, shared searches that appear to all users, in addition to their own private searches.

      Default value is null

      See Also:
    • iconField

      public String iconField
      Designates a field of type:"image" as the field to use when rendering a record as an image, for example, in a TileGrid.

      For example, for a DataSource of employees, a "photo" field of type "image" should be designated as the iconField.

      If not explicitly set, iconField looks for fields named "picture", "thumbnail", "icon", "image" and "img", in that order, and will use any of these fields as the iconField if it exists and has type "image".

      To avoid any field being used as the iconField, set iconField to null.

      Default value is see below

      See Also:
    • forceSort

      public String forceSort
      For DataSources of serverType "sql" only, indicates whether we should automatically add a sort field for paged fetches on this DataSource. If left unset, this property defaults to one of the global values described in the defaultSortField documentation. Also note, this property can be overridden per-operationBinding.

      Note, the ability to set this property per-DataSource is only provided to allow for complete configurability in unusual cases. See the defaultSortField docs for details of why use of this property should be considered a red flag.

      Default value is null

      See Also:
    • addedAuditFields

      public DataSourceField[] addedAuditFields
      The list of extra manually managed fields that will be added to the fields of the Audit DataSource.

      This feature enables the storage of additional information in the Audit DataSource alongside the standard audit data. In order to do that, the audited DataSource needs to declare an auditDSConstructor referring to a custom serverConstructor, so that all requests that add data to the Audit DataSource can be intercepted, allowing changes to be made to the new records (obtained using the DSRequest.getValues() server-side API). In this particular use case, values for the addedAuditFields need to be provided.

      Example of an audited DataSource (schematically):

        <DataSource audit="true" auditDSConstructor="package.AuditDS">
        <fields>....</fields>
        <addedAuditFields>
             <addedAuditField name="altitude" type="float" />
             <addedAuditField name="longitude" type="float" />
             <addedAuditField name="logCorrelationId" type="text" />
        </addedAuditFields>
        </DataSource>
        
      An example implementation of AuditDS could be as follows:
        public class AuditDS extends SQLDataSource {
            public DSResponse executeAdd(DSRequest req) throws Exception {
                // populate additional fields
                Map values = req.getValues();
                values.put("altitude", 54.685);
                values.put("longitude", 25.286);
                values.put("logCorrelationId", "foobar");
                // execute "add" request
                return super.executeAdd(req);
            }
        }
        

      Default value is null

      See Also:
    • fields

      public DataSourceField[] fields
      The list of fields that compose records from this DataSource.

      Each DataSource field can have type, user-visible title, validators, and other metadata attached.

      Default value is null

      See Also:
    • ownerIdNullRole

      public String ownerIdNullRole
      If ownerIdField is in force, specifies a role that will allow the ownerIdField to take a null value. Any user that has that role is allowed to create client-initiated "add" and "update" operations that specify a null value for the ownerIdField, and the system will persist the null value rather than forcing in the currently authenticated user's user id as it normally would. If any client-initiated "add" or "update" request specifies any non-null value for the ownerIdField, the normal behavior of the system reasserts itself and the current user's user id is forced into the ownerIdField. This allows authorised users (ie, those with the necessary role) to choose between saving public or private records, just by sending a null or non-null value for the ownerIdField.

      This property has no effect if ownerIdField is not specified.

      This property can be used in conjunction with ownerIdNullAccess to create the concept of shared, or public, records - see that property's documentation for an example.

      Default value is null

      See Also:
    • endGap

      public int endGap
      If we are loading progressively, indicates the number of extra records Smart GWT Server will advertise as being available, if it detects that there are more records to view (see lookAhead). This property has no effect if we are not progressive-loading.

      Note that when viewing DataSource data in a ListGrid, it is not recommended to have the endGap be larger than the ResultSet.resultSize. This can result in a situation where the entire range requested by the ResultSet is beyond the true end of the data set, with unpredictable results.

      Default value is 20

      See Also:
    • omitNullDefaultsOnAdd

      public boolean omitNullDefaultsOnAdd
      When true, and noNullUpdates is also true, indicates that "add" requests on this DataSource will have null-valued fields removed from the request entirely before it is sent to the server, as opposed to the default behavior of replacing such null values with defaults.

      Default value is false

      See Also:
    • nullStringValue

      public String nullStringValue
      If noNullUpdates is set, the value to use for any text field that has a null value assigned on an update operation, and does not specify an explicit nullReplacementValue.

      Default value is ""

      See Also:
      • noNullUpdates
      • com.smartgwt.client.docs.serverds.DataSourceField#nullReplacementValue
    • useOfflineStorage

      public Boolean useOfflineStorage
      Whether we store server responses for this DataSource into browser-based offline storage, and then use those stored responses at a later time if we are offline (ie, the application cannot connect to the server). Note that by default we do NOT use offline storage for a dataSource.

      Default value is null

    • params

      public Map params
      Applies to RestConnector dataSources (serverType "rest") only, and is overridable per operationBinding (see OperationBinding.params).

      A list of key/value pairs to pass as URL parameters when constructing the query part of the REST service URL. These can include templating, as described in the RestConnector overview. For example:

           <params>
             <customParam>$arbitraryVarInTemplateContext</customParam>
             <userIdentifier>$config['rest.service.user.identifier']</userIdentifier>
           </params>
      Bear in mind that requestFormat "params" implies by default a behavior that automatically adds parameters corresponding to values and/or criteria (see the RESTRequestFormat docs for details), so you would not normally use this property to specify parameters based on request values. Any params explicitly specified with the params property at DataSource or operationBinding level will be added to the auto-created list of params implied if requestFormat is "params".

      Default value is null

      See Also:
    • projectFileKey

      public String projectFileKey
      For DataSources with type projectFile, looks up the locations to use as projectFileLocations from the project's configuration (i.e. project.properties, server.properties etc.).

      For instance, to look up the value of project.datasources and use it for projectFileLocations, use "datasources" as the projectFileKey.

      If you specify both projectFileKey and projectFileLocations, then both will be used, with the projectFileLocations applied last.

      Default value is null

      See Also:
    • logSlowUpdate

      public Integer logSlowUpdate
      Allows you to specify an SQL query execution time threshold, in milliseconds, for "update" operations; if a query exceeds this threshold it is identified as "slow" and may be logged under a specific logging category.

      See logSlowSQL for more details.

      Default value is null

    • useSpringTransaction

      public Boolean useSpringTransaction
      This flag is part of the Automatic Transactions feature; it is only applicable in Power Edition and above.

      If true, causes all transactional operations on this DataSource to use the current Spring-managed transaction, if one exists. If there is no current Spring transaction to use at the time of execution, a server-side Exception is thrown. Note, a "transactional operation" is one that would have joined the Smart GWT shared transaction in the absence of Spring integration - see autoJoinTransactions.

      This feature is primarily intended for situations where you have DMI methods that make use of both Spring DAO operations and Smart GWT DSRequest operations, and you would like all of them to share the same transaction. An example of the primary intended use case:

          @Transactional(isolation=Isolation.READ_COMMITTED, propagation=Propagation.REQUIRED)
          public class WorldService {
       
            public DSResponse doSomeStuff(DSRequest dsReq, HttpServletRequest servletReq) 
            throws Exception 
            {
              ApplicationContext ac = (ApplicationContext)servletReq.getSession().getServletContext().getAttribute("applicationContext");
              WorldDao dao = (WorldDao)ac.getBean("worldDao");
              dao.insert(dsReq.getValues());
              DSRequest other = new DSRequest("MyOtherDataSource", "add");
              // Set up the 'other' dsRequest with criteria, values, etc
              //  ...
       
              // This dsRequest execution will use the same transaction that the DAO operation
              // above used; if it fails, the DAO operation will be rolled back
              other.execute();
       
              return new DSResponse();
            }
          }
      Note: if you want to rollback an integrated Spring-managed transaction, you can use any of the normal Spring methods for transaction rollback:
      • Programmatically mark the transaction for rollback with the setRollbackOnly() API
      • Throw a RuntimeException, or
      • Throw an ordinary checked Exception, but configure Spring to roll back on that Exception. This can be done in the @Transactional annotation:
              @Transactional(isolation=Isolation.READ_COMMITTED, propagation=Propagation.REQUIRED, rollbackFor=MyRollbackException.class)
      Spring's exception-handling model is different from Smart GWT's, so care must be taken to get the correct error processing. If a transactional DSRequest fails, Smart GWT code will throw an ordinary checked Exception; but Spring will ignore that Exception. So you can either:
      • Wrap every DSRequest.execute() in a try/catch block. Catch Exception and throw a RuntimeException instead
      • Just use the "rollbackFor" annotation to make your transactional method rollback for all instances of Exception

      Note: Spring transaction integration is conceptually different from Smart GWT's built-in transaction feature, because Smart GWT transactions apply to a queue of DSRequests, whereas Spring transactions are scoped to a single method invocation. If you want to make a whole Smart GWT queue share a single Spring-managed transaction, you can wrap the processing of an entire queue in a call to a transactional Spring method. See the Using Spring Transactions with Smart GWT DMI section at the bottom of the Spring integration page for more details.

      You can set useSpringTransaction as the default setting for all dataSources for a given database provider by adding the property {dbName}.useSpringTransaction to your server.properties file. For example, Mysql.useSpringTransaction: true or hibernate.useSpringTransaction: true. You can set it as the default for all providers with a server.properties setting like this: useSpringTransaction: true. When useSpringTransaction is the default, you can switch it off for a specific dataSource by explicitly setting the flag to false for that DataSource.

      Finally, this setting can be overridden at the operationBinding level - see OperationBinding.useSpringTransaction.

      Configuration

      When using Spring transactions, Smart GWT needs a way to lookup the JNDI connection being used by Spring, and this needs to be configured. First, register a bean like this in your applicationContext.xml file:
          <bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
              <!-- Set this to the JNDI name Spring is using -->
              <property name="jndiName" value="isomorphic/jdbc/defaultDatabase"/>
          </bean>
        
      and then add a line like this to your server.properties:
          # Set this property to match the "id" of the JndiObjectFactoryBean registered in Spring
          sql.spring.jdbcDataSourceBean: dataSource
        

      Default value is null

      See Also:
    • transformResponseScript

      public String transformResponseScript
      A scriptlet to be executed on the server after running any operation on this dataSource. The intention is that this scriptlet transforms the response data in some way. It is provided to allow complex, programmatic transformations without having to handle the operation in a DMI or handler script. To avoid confusion, it should be noted that, in many cases, you could instead write response transformation logic in a script that first invokes execute() on the DSRequest to run the default behavior. The advantage of transformResponseScript is that it allows you to partition any script logic appropriately, and is more self-documenting.

      If OperationBinding.transformResponseScript is also specified, it will be executed for the operation binding in question in addition to and after the DataSource-level script.

      All of the variables available in a normal script are available to a transformResponseScript, but the framework also provides some additional variables useful in the specific context of transforming response data:

      • dsResponse is the actual DSResponse object
      • responseObjects is the response data as a list of objects, either Maps or Javabeans, depending on what data model you are using - see beanClassName. If the underlying response data is actually a single object, this is just that single object wrapped in a List
      • responseObject is the response data as a single object. If the underlying response data is not a single object - ie, it is a list of objects - this will just be the first item in the list
      Additionally, if the dataSource is a server-side RestConnector, a number of additional variables are made available:
      • responseText is the actual XML or JSON text returned by the REST service
      • completeData is the entire REST response converted into a structure of Java Lists and Maps by parsing the JSON or XML text (note, only if the requiresCompleteRESTResponse flag is true)
      • responseRecords is the REST response converted into a list of records, either by:
        • Parsing a list-type response from the JSON or XML text returned by the REST service, or
        • Parsing the response and then applying the recordXPath, if a recordXPath is declared
        responseRecords is provided to the script as a List of Maps. If the dataSource expresses a responseTemplate, it is the list of records after templating has been processed
      • responseRecordsBeforeTemplating is the responseRecords as they were before the responseTemplate was applied. It is only present if the dataSource declares a responseTemplate
      Normally, both responseObjects and (for RestConnector dataSources only) responseRecords are "live". This means that any changes you make to them in your script will affect the underlying objects in the response data and be seen in downstream processing. Your script can therefore modify things in place and does not need to return a value; this is the recommended approach.

      If your script does return a value, we will assign that value as the response data on return from the script.

      Warning: It is advisable to explicitly return null if you do not wish to return a value. See the section "Returning values and JavaScript", which also applies to Groovy and some other JSR223 languages, in the server scripting overview.
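
      As a rough illustration, a DataSource-level transformResponseScript might look like the sketch below. This is an assumption-laden example: it assumes Groovy as the scripting language, declared via a language attribute as for other server scriptlets, and the "orders" DataSource and its fields are hypothetical.

           <DataSource ID="orders" serverType="sql" tableName="orders">
             <transformResponseScript language="groovy">
               // responseObjects is "live", so records can be modified in place
               responseObjects.each { record ->
                 // derive a display label from two (hypothetical) text columns
                 record.displayLabel = record.customerName + " / " + record.status
               }
               // we modified in place, so explicitly return null rather than a value
               return null;
             </transformResponseScript>
           </DataSource>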

      Note, if you prefer a Java solution rather than placing scripts in your .ds.xml files, you can instead extend the Java BasicDataSource class and override its transformResponse() method. If you both override the Java method and provide a transformResponseScript, the Java method runs first and any transformations it makes will be visible to the script. See serverConstructor for details of how to use a custom class to implement a DataSource server-side.

      Default value is null

      See Also:
    • transformRequestScript

      public String transformRequestScript
      A scriptlet to be executed on the server prior to running any operation on this dataSource. The intention is that this scriptlet transforms the DSRequest in some way - probably its criteria and/or values, but maybe something less obvious, such as adding to its templateContext Map - prior to running the operation proper. It is provided to allow more complex, programmatic transformations than are possible with declarative DSRequestModifiers, without having to handle the operation in a DMI or handler script. To avoid confusion, it should be noted that, in many cases, you could instead write request transformation logic in a script and then have that script invoke execute() on the DSRequest to proceed with default behavior. The advantage of transformRequestScript is that it allows you to partition your script logic appropriately, and is more self-documenting.

      If OperationBinding.transformRequestScript is also specified, it will be executed for the operation binding in question in addition to and after the DataSource-level script.

      All of the variables available in a normal script are available to a transformRequestScript, including the three you are most likely to use: dsRequest, criteria and values. All three of these variables are "live" - any changes you make to them will affect the underlying object and be seen in downstream processing. This means that your script can modify things in place and does not need to return a value; this is the recommended approach.

      If your script does return a value, we will assign that value to criteria for a fetch or remove operation, and to values for an add or update operation. This may well be what you want, but it may not - this is why we recommend modifying in place and not returning a value.

      Warning: It is advisable to explicitly return null if you do not wish to return a value. See the section "Returning values and JavaScript", which also applies to Groovy and some other JSR223 languages, in the server scripting overview.
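
      For illustration only, a transformRequestScript that adjusts the criteria in place might look like the following sketch (Groovy is assumed as the script language, and the "includeArchived" field is hypothetical):

           <DataSource ID="orders" serverType="sql" tableName="orders">
             <transformRequestScript language="groovy">
               // criteria is "live": changes made here are seen by the operation proper
               if (dsRequest.getOperationType() == "fetch") {
                 criteria.includeArchived = "false";
               }
               // modified in place, so explicitly return null
               return null;
             </transformRequestScript>
           </DataSource>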

      Default value is null

      See Also:
    • defaultBooleanStorageStrategy

      public String defaultBooleanStorageStrategy
      For serverType:"sql" DataSources, sets the default sqlStorageStrategy to use for boolean fields where no sqlStorageStrategy has been declared on the field.

      Can also be set system-wide via the Server_properties setting sql.defaultBooleanStorageStrategy, or for a particular database configuration by setting sql.dbName.defaultBooleanStorageStrategy (see Admin Console overview for more information on SQL configuration).

      Note that when this property is unset, the default DataSourceField.sqlStorageStrategy strategy is effectively "string".
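
      For example, the following sketch applies a DataSource-wide default (assuming "number" is one of the supported sqlStorageStrategy values for boolean fields; the table and field names are hypothetical):

           <DataSource ID="orders" serverType="sql" tableName="orders"
                       defaultBooleanStorageStrategy="number">
             <fields>
               <field name="id"      type="sequence" primaryKey="true"/>
               <!-- no sqlStorageStrategy declared, so the DataSource-level default applies -->
               <field name="shipped" type="boolean"/>
             </fields>
           </DataSource>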

      Default value is null

      See Also:
    • defaultMultiUpdatePolicy

      public MultiUpdatePolicy defaultMultiUpdatePolicy
      Controls when primary keys are required for "update" and "remove" server operations, when allowMultiUpdate has not been explicitly configured, either via operationBinding.allowMultiUpdate or via the server-side API DSRequest.setAllowMultiUpdate().

      A default value of null means this DataSource will use the system-wide default, which is set via datasources.defaultMultiUpdatePolicy in server.properties and defaults to not allowing multi updates for requests associated with an RPCManager; see MultiUpdatePolicy for details.

      Default value is null

      See Also:
    • useParentFieldOrder

      public Boolean useParentFieldOrder
      For a DataSource that inherits fields from another DataSource (via inheritsFrom), indicates that the parent's field order should be used instead of the order of the fields as declared in this DataSource. New fields, if any, are placed at the end.

      Default value is null

    • idClassName

      public String idClassName
      For JPA and Hibernate DataSources, this property indicates that the DataSource has a composite primary key, and specifies the fully-qualified Java class that represents it:
      • with @EmbeddedId, specify the class name of the declared id
      • with @IdClass, specify the class named in the annotation declaration
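
      For example, a JPA DataSource backed by an entity that declares its composite key with @IdClass might be configured like the sketch below (the class names are hypothetical):

           <DataSource ID="orderLine" serverType="jpa" autoDeriveSchema="true"
                       beanClassName="com.example.OrderLine"
                       idClassName="com.example.OrderLinePK"/>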

      Default value is null

    • auditChangedFieldsFieldName

      public String auditChangedFieldsFieldName
      For DataSources with auditing enabled, specifies the field name used to store the names of the fields which were updated. If an empty string is specified as the field name, the audit DataSource will not store this field.

      Note that this field will only be set for update operations.

      Default value is "audit_changedFields"

    • defaultSortField

      public String defaultSortField
      For DataSources of serverType "sql" only, the name of a field to include in the ORDER BY clause as a means of enforcing a stable sort order, for paged fetches only. This property has no effect for non-SQL dataSources, or for non-paged fetches.

      Generally speaking, databases make no guarantee about the order of rows in a resultset unless an ORDER BY clause was provided. This is not usually a problem if you actually don't care about the row order, but there is a catch if you are using paged fetching: because the pages are fetched using completely different queries, the database is at liberty to use different orderings from one fetch to another, and in some cases, with some databases, that is what actually happens. This leads to broken paging behavior, with some records duplicated and others omitted.

      Note that it is unusual for a database to change strategies between queries like this; generally speaking, rows are ordered in some kind of natural ordering in the absence of an explicit order - typically insertion order, or by primary key value. However, it does happen, and is more likely with some database products than others.

      This property only has an effect if forceSort is in effect for the fetch operation. See below for DataSource- and operation-level options, but ordinarily this is arranged by setting the forceSort property for the current database configuration in your server.properties file:

          # Given this database definition
          sql.MyDB.database.type: mysql
       
          # Either of these settings will enable automatic sorting
          sql.MyDB.forceSort: true
          # Or
          sql.mysql.forceSort: true
        
      Note, the defaultSortField should ideally provide a unique ordering: so for an employee table, payroll number would be preferable to employee name. A non-unique ordering will usually be sufficient to ensure stability of ordering from one query to the next, because it will usually ensure that the database is forced to use the same index in each case. However, the database may still choose to order rows differently within the confines of the non-unique ordering, so only a unique ordering is guaranteed to ensure stability.
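
      If you do decide to declare one explicitly, a sketch along these lines would enforce a stable ordering by a unique column (the table and field names are hypothetical):

           <DataSource ID="employees" serverType="sql" tableName="employee"
                       autoDeriveSchema="true" defaultSortField="payrollNumber"/>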

      Fields of type creatorTimestamp are also good candidates for this purpose - assuming you have a suitable index in place, and assuming sorting by temporal values does not introduce performance problems with your database of choice - as they are often unique or near-to-unique, and they reflect the insertion order, which is the "natural ordering" in some (not all) databases.

      Note that this automatic sorting does not interfere with the ordinary sorting that your application may do: it is applied in addition to any application sort order. So if your application imposes no sort order, the resultset will be sorted by the defaultSortField; if your application requests a sort order of, eg, "state" then "city", the resultset will be ordered by "state", then "city", and then the defaultSortField.

      If forceSort is enabled and you do not provide a defaultSortField for a given database, Smart GWT will instead use the primaryKey field(s). If the dataSource does not define a primaryKey, we will just use the dataSource's first defined field. Recommendation: Unless you have a reason to explicitly declare a defaultSortField, we recommend that you leave it undefined for any DataSource that declares a primaryKey (and we recommend that all dataSources declare a primaryKey). The primaryKey field is usually the ideal candidate for this purpose, because it is unique and almost certainly indexed.

      Note that forceSort is enabled by default for PostgreSQL, because this product is known to be less likely to retain a stable sort order between two similar, unordered queries.

      DataSource- and OperationBinding-level forceSort flag

      As well as the above-mentioned global settings, it is possible to configure automatic sorting behavior for paged fetches at both the individual DataSource level and operationBinding level. However, this facility is only provided to allow complete configurability in unusual cases. Generally speaking, if a database requires ordering for correct behavior with paged fetches, it always requires ordering for correct behavior with paged fetches; you can't ordinarily pick and choose which tables or which individual fetches need to be ordered.

      That said, there may be real world cases where a database that normally requires ordering for correct behavior, nevertheless has individual cases where that is not required - maybe on tables that have only a single index, or in cases where there are no joins involved, or maybe in other circumstances related to how that specific database product works internally. Or again, there may be unusual individual cases where a database that ordinarily works fine for paged fetches without requiring ordering, needs to apply an ordering.

      We provide the DataSource- and operation-level forceSort flags to allow you to work around or take advantage of these database-specific quirks. However, use of them should be considered a red flag because they might cause problems if things change in the future (new database release, change in the underlying query, addition of a foreignKey relation at the database level, etc).

      Default value is null

      See Also:
    • autoDeriveSchema

      public Boolean autoDeriveSchema
      This property allows you to specify that your DataSource's schema (field definitions) should be automatically derived from some kind of metadata. This causes Smart GWT to create a special super-DataSource, which is used purely as a source of default schema for this DataSource; this is arranged by causing the autoDerived DataSource to automatically inherit from the special super-DataSource. This allows you to auto-derive schema from existing metadata, whilst still being able to override any or all of it as required. By default, additional derived field definitions are placed at the end, but that can be changed via the useParentFieldOrder flag. Also, derived field definitions may be hidden using showLocalFieldsOnly.

      This property has a different implementation depending upon the serverType of the DataSource:

      • For a DataSource with serverType: "sql", automatically derive the dataSource's schema from the Spring bean or Java class specified in schemaBean. If schemaBean is not specified, derive the schema from the columns in the SQL table specified in tableName. More information on SQL DataSources is here
      • For a DataSource with serverType: "hibernate", automatically derive the dataSource's schema from the Spring bean, Hibernate mapping or Java class named in the schemaBean property. If no such thing exists, derive the schema from the Hibernate mapping or Java class specified in the beanClassName property (this allows you to specify schema and mapping separately, in the unusual circumstance that you have a need to do so). Note that the "mappings" referred to here can mean either .hbm.xml files or annotated classes; both are supported. If neither of these is successful, derive the schema from the underlying SQL table specified in tableName. More information on Hibernate DataSources is here
      • For a DataSource with serverType: "jpa1" or "jpa", automatically derive the dataSource's schema from the annotated JPA class named in the schemaBean property. If the schemaBean property is not defined, derive the schema from the annotated JPA class named in the beanClassName property (as with Hibernate, this allows you to specify schema and mapping separately if you need to do so). JPA DataSource generation relies on annotations (the orm.xml mapping file is not supported). More information on JPA DataSources is here
      • For other DataSource types, attempt to find a Spring bean with the name specified in the schemaBean property. If no such bean is found (or Spring is not present), attempt to instantiate an object whose fully-qualified class name is the value in the schemaBean property. If one of these approaches succeeds, we derive the schema from the discovered object (by treating it as a Java Bean and assuming that each one of its getters corresponds to a like-named field in the DataSource). More information on custom DataSource implementations is here.
      Note that when dataSource schema is derived by introspecting the Java class or Spring bean the derived fields are sorted alphabetically, but with primary key fields at the top.
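
      As a minimal sketch (the bean class name is hypothetical), a JPA DataSource that derives its entire schema from an annotated bean could be declared as:

           <DataSource ID="customer" serverType="jpa" autoDeriveSchema="true"
                       beanClassName="com.example.Customer"/>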

      Field types

      The following table shows how SQL types are derived into DataSource types when we use an SQL table as a source of metadata for a SQL or Hibernate DataSource:
      SQL type -> DataSource type
      CHAR, VARCHAR, LONGVARCHAR, TEXT, CLOB -> text
      BIT, TINYINT, SMALLINT, INTEGER, BIGINT, DECIMAL*, NUMBER** -> integer
      REAL, FLOAT, DOUBLE, DECIMAL*, NUMBER** -> float
      DATE -> date
      TIME -> time
      TIMESTAMP -> datetime
      BLOB, BINARY, VARBINARY, LONGVARBINARY -> binary
      *For DECIMAL types, we inspect the "DECIMAL_DIGITS" attribute of the JDBC metadata and designate the field type "integer" if that attribute is 0, or "float" if it is some other value.
      **NUMBER is an Oracle-only type that appears in the JDBC metadata as DECIMAL and is handled exactly the same way as DECIMAL

      The following table shows how Java types are derived into DataSource types when we use an unannotated Java class (Spring bean, Hibernate mapping or POJO) as a source of metadata for a SQL, Hibernate or custom DataSource:

      Java type -> DataSource type
      boolean, Boolean -> boolean
      char, Character, String, Reader -> text
      byte, short, int, long, Byte, Short, Integer, Long, BigInteger -> integer
      float, double, Float, Double, BigDecimal -> float
      Enum -> enum (see discussion below)
      InputStream -> binary
      java.sql.Date, java.util.Date, java.util.Calendar -> date
      java.sql.Time -> time
      java.sql.Timestamp -> datetime

      Finally, this table shows how Java types are derived into DataSource types when we use an annotated class as a source of metadata. Note annotated classes are necessary for JPA DataSources, but you can choose to use them for other types of DataSource as well. For Hibernate DataSources, this is very worthwhile because Hibernate will also make use of the annotations as config, meaning you don't need to specify .hbm.xml files. For SQL and custom DataSources, there is no benefit at the persistence level, but it may still be worthwhile because the use of an annotated class gives us better metadata and allows us to generate a better, closer-fitting autoDerive DataSource than we can from examination of SQL schema or plain Java Beans:

      Java type -> DataSource type
      boolean, Boolean -> boolean
      char, Character, String, Reader -> text
      byte, short, int, long, Byte, Short, Integer, Long, BigInteger -> integer
      float, double, Float, Double, BigDecimal -> float
      InputStream -> binary
      java.util.Date (with Temporal set to DATE), java.sql.Date -> date
      java.util.Date (with Temporal set to TIME), java.sql.Time -> time
      java.util.Date (with Temporal set to TIMESTAMP), java.util.Calendar, java.sql.Timestamp -> datetime
      Enum (with Enumerated set to STRING) -> enum (see discussion below)
      Enum (with Enumerated set to ORDINAL) -> intEnum (see discussion below)
      Field with Lob annotation -> binary
      Field with GeneratedValue annotation -> sequence, if the field is an integer type (see discussion below)

      Primary keys, sequences and identity columns

      We attempt to derive information about primary keys from the metadata we have.

      If the metadata source is an SQL table:

      • If the table does not have a native primary key constraint, no attempt is made to identify primary key fields. Otherwise:
      • The column or columns that make up the table's native primary key constraint are identified using the JDBC DatabaseMetaData.getPrimaryKeys() API
      • Each DataSource field that corresponds to one of these native primary key columns is marked primaryKey: true
      • For each of these columns, the metadata returned by DatabaseMetaData.getColumns() is inspected. If the metadata includes IS_AUTOINCREMENT=YES, we mark the corresponding field as type="sequence". This information should be reliably provided by databases that implement "auto-increment" or "identity" column types, such as MySQL or Microsoft SQL Server
      • If the previous step does not identify a column as a sequence, we inspect the ResultSetMetaData obtained by running a dummy query on the table. If the isAutoIncrement() API returns true for that column, we mark the corresponding field as type="sequence"
      • If the previous steps have not identified the column as a sequence, we check the TYPE_NAME in the column metadata. If it is "serial", this means the column is a PostgreSQL "serial" or "serial8" type column. Postgres does not transparently implement auto-increment columns, but it does provide this serial type, which causes the column to be implicitly bound to an underlying sequence. So this type causes us to mark the field type="sequence", and we also set implicitSequence true
      • If the previous steps have not identified the column as a sequence, we check the COLUMN_DEF in the column metadata. If this contains the token "$$ISEQ" and ends with "NEXTVAL", this means the column is an Oracle "GENERATED AS IDENTITY" column. This type of column was introduced in Oracle 12c and is conceptually exactly the same thing as the Postgres "serial" column described above. We treat it the same way: mark it type="sequence" and implicitSequence="true"
      • If the previous steps have not identified the column as a sequence, then you may be using a pure sequence database, such as an Oracle version earlier than 12c, or you may be using a database where both sequences and identity columns are available (Oracle, Postgres, DB2), and a sequence is being used either by design or because the table was created before the database product supported identity columns. In this case, we cannot determine if the column should be a sequence or not. However, in many applications, the fact that the column is an integer and is a primary key would imply that it is also a sequence. Therefore, if the column is an integer and the server.properties flag auto.derive.integer.pk.always.sequence is true, we mark the field as type="sequence"
      • If, after all this, Smart GWT ends up incorrectly marking a primary key field as a sequence (or vice versa), you can always override it on a per-field basis by simply redeclaring the field with the correct type in your .ds.xml file:
           <DataSource serverType="sql" tableName="myTable" autoDeriveSchema="true">
             <fields>
               <!-- This field was incorrectly marked as a sequence -->
               <field name="notASeq" type="integer" />
               <!-- This field was incorrectly marked as an integer when it should be a sequence -->
               <field name="isASeq" type="sequence" />
             </fields>
           </DataSource>

      If the metadata source is Hibernate mappings described in a .hbm.xml file:

      • The first field we encounter that is described in the mapping with an <id> tag is marked as a primaryKey
      • If that field is marked as being generated, we set its type to "sequence"

      If the metadata source is an annotated object (whether JPA, Hibernate or just an annotated POJO):

      • Any field with an @Id annotation is marked as a primaryKey (this differs from the Hibernate .hbm.xml file case because that is specific to Hibernate, which does support composite keys, but not by specifying multiple <id> tags. Annotations are supported, via annotated POJOs, for any kind of persistence strategy, so multiple @Id fields are perfectly valid)
      • Any field with a @GeneratedValue annotation is either marked as type="sequence" (if it is an integer type) or as autoGenerated="true" (for other field types)
      Finally, if the metadata is a plain, unannotated Java object, no attempt is made to derive primary key fields.

      enums and valueMaps

      When we come across Java Enum properties in plain or annotated classes, as well as setting the field type as noted in the above tables, we also generate a valueMap for the field, based on the Enum members.

      For cases where we generate a field of Smart GWT type "enum" (see the above tables), the keys of the valueMap are the result of calling name() on each member of the underlying Java Enum (in other words, its value exactly as declared in its enum declaration). For cases where we generate a field of Smart GWT type "intEnum", the keys of the valueMap are strings representing the ordinal number of each member in the Java Enum - "0", "1", etc. Note that this behavior will be overridden by DataSource.enumTranslateStrategy if both are set.

      In both of these cases, the display values generated for the valueMap are the result of calling toString() on each Enum member. If that gives the same value as calling name(), the value is passed through DataTools.deriveTitleFromName(), which applies the same processing rules as getAutoTitle() to derive a more user-friendly display value.

      Further notes

      schemaBean implies autoDeriveSchema, because it has no other purpose than to name the bean to use for auto-derived schema. Thus, if you specify schemaBean you do not need to specify autoDeriveSchema as well (though it does no harm to do so). However, tableName and beanClassName can be validly specified without implying autoDeriveSchema, so in those cases you must explicitly specify autoDeriveSchema.

      The underlying super-DataSource is cached in server memory, so that it does not have to be derived and created each time you need it. However, the cache manager will automatically refresh the cached copy if it detects that the deriving DataSource has changed. Thus, if you change the metadata your DataSource is deriving (if, for example, you add a column to a table), all you need to do is touch the .ds.xml file (ie, update its last changed timestamp - you don't actually have to change it) and the cached copy will be refreshed next time it is needed.

      When autoDeriveSchema is set, SQLDataSource will automatically discover foreignKeys and deliver table and column name information to the client in hashed form, so that two DataSources that are linked by native SQL foreign keys will automatically discover each other if loaded into the same application, and set foreignKey automatically. Because the table and column names are delivered as cryptographic hashes, there is no information leakage; regardless, the feature can be disabled by setting datasource.autoLinkFKs to false in server.properties. This hashed linkage information is delivered to the client in the properties tableCode and DataSourceField.fkTableCode/fkColumnCode.

      Default value is null

    • serverConfig

      public DataSource serverConfig
      Configuration settings that will be used purely on the server side. Any properties declared within a serverConfig block will be overlaid onto the regular dataSource configuration, such that any property declared in serverConfig takes priority, but only on the server; the client never sees the serverConfig block.

      serverConfig is primarily intended for use with the RestConnector, and is used for two related purposes. Firstly, it is a mechanism for declaring properties that only the server should see, to avoid information leakage to the client. Typical usages of this type would be things like headers and params - things that are specific to restConnector's communication with the remote REST server, and no business of the client's.

      Secondly, serverConfig is used in cases where you need the client and server to use different values for a property. The only property for which this definitely arises is dataURL (and the same property on the OperationBinding), because dataURL is used both on the client, to configure the endpoint for server requests, and by the RestConnector on the server to configure the target URL(s) for REST request(s). Thus, you should always specify a dataURL inside serverConfig if it is intended for use on the server side by a RestConnector DataSource; failure to do so will cause the client to send its DSRequests directly to the remote REST server, which obviously will not know how to handle them.

      Sub-objects will be merged such that properties declared in serverConfig replace the same properties declared in the regular sub-object, but properties that are missing in the serverConfig remain in the merged sub-object. For example:

        <DataSource ... >
       
          <operationBindings>
            <operationBinding operationType="fetch" dataURL="specialIDACallURL" outputs="some,list,of,fields" /> 
            <!-- the browser can see this, so this works for the extremely rare case that a DataSource should
                    contact different versions of the IDACall servlet for different operations.  The browser is
                     also aware of the limited outputs -->
            </operationBindings>
       
            <serverConfig>
              <operationBindings>
                <operationBinding operationType="fetch" dataURL="restServiceURL" /> 
                <!-- this is the actual URL of the REST service to contact from the server.  The browser never sees this
                      so its version of "dataURL" is retained.  At the same time, the "outputs" from the browser-visible 
                      definition still apply, because the configs were merged -->
              </operationBindings>
            </serverConfig>
        </DataSource>
        

      Default value is null

      See Also:
    • useSequences

      public Boolean useSequences
      For a DataSource with serverType:"sql", this flag indicates whether any fields of type "sequence" should be backed by a native database sequence (if the flag is true) or an auto-increment/identity column (if it is false). It is only applicable in cases where the database in use supports both approaches, and Smart GWT supports both strategies with that particular database. For most databases, even those that natively support either approach, Smart GWT uses one or the other, and this cannot be configured.

      Right now, the only supported database is Microsoft SQL Server. If you specify this flag on a non-SQL DataSource or if any database other than SQL Server is in use, the flag is simply ignored.

      If not set, this flag defaults to the server.properties setting sql.{dbName}.use.sequences, which in turn defaults to false.

      Default value is null

    • allowAdvancedCriteria

      public Boolean allowAdvancedCriteria
      By default, all DataSources are assumed to be capable of handling AdvancedCriteria on fetch or filter type operations. This property may be set to false to indicate that this dataSource does not support advancedCriteria. See supportsAdvancedCriteria() for further information on this.

      NOTE: If you specify this property in a DataSource descriptor (.ds.xml file), it is enforced on the server. This means that if you run a request containing AdvancedCriteria against a DataSource that advertises itself as allowAdvancedCriteria:false, it will be rejected.

      Default value is null

      See Also:
    • schema

      public String schema
      This property only applies to the built-in SQL DataSource provided in Pro and better editions of Smart GWT

      Defines the name of the schema we use to qualify the tableName in generated SQL. If you do not provide this property, we will try to use the Smart GWT default schema for this dbName; you specify this for a given dbName in your server.properties file like this:

           sql.the_dbName.default.schema: myDefaultSchema
        
      If there is no DataSource-specific schema and no Smart GWT default schema, table names will not be qualified in generated SQL, and thus the database default schema will be used. Support for multiple schemas (or schemata) varies quite significantly across the supported databases, as does the meaning of the phrase "default schema". In addition, some databases allow you to override the default schema in the JDBC connection URL, which is a preferable approach if all your tables are in the same (non-default) schema, because it makes the generated SQL simpler (no need to qualify every table)

      The following table provides information by product:

      DB2: Arbitrarily named schemas are supported. The default schema is named after the connecting user, though this can be overridden by specifying the "currentSchema" property on the JDBC connection URL
      DB2 for iSeries: Arbitrarily named schemas are supported. "Schema" is synonymous with "library". The default schema depends on the setting of the "naming" connection property. When this is set to "sql", behavior is similar to other DB2 editions: the default schema is named after the connecting user, unless overridden by specifying a library name in the JDBC connection URL. When "naming" is set to "system", the schema of an unqualified table is resolved using a traditional search of the library list; the library list can be provided in the "libraries" property
      Firebird: Firebird does not support the concept of schema at all - all "schema objects" like tables and indexes belong directly to the database. In addition, Firebird actively rejects qualified table names in queries as syntax errors; therefore, you should not set the schema property for a DataSource that will be backed by a Firebird database
      HSQLDB: Arbitrarily named schemas are supported. The default schema is auto-created when the database is created; by default it is called "PUBLIC", but can be renamed. It is not possible to set the default schema in the JDBC connection URL
      Informix: Informix databases can be flagged as "ANSI mode" at creation time. ANSI-mode databases behave similarly to DB2 for schema support: arbitrarily named schemas are supported, and the default schema is the one named after the connected user. Non-ANSI databases have no real schema support at all. It is not possible to set the default schema in the JDBC connection URL with either type of database
      Microsoft SQL Server: Prior to SQL Server 2005, schema support is similar to Oracle: "schema" is synonymous with "owner". As of SQL Server 2005, schema is supported as a separate concept, and a user's default schema can be configured (though it still defaults to a schema with the same name as the user). It is not possible to set the default schema in the JDBC connection URL
      MySQL: MySQL does not have a separate concept of "schema"; it treats the terms "schema" and "database" interchangeably. In fact MySQL databases actually behave more like schemas, in that a connection to database X can refer to a table in database Y simply by qualifying the name in the query. Also, because schema and database are the same concept in MySQL, overriding the "default schema" is done implicitly when you specify which database to connect to in your JDBC connection URL
      Oracle: Arbitrarily named schemas are not supported; in Oracle, "schema" is synonymous with "user", so each valid user in the database is associated implicitly with a schema of the same name, and there are no other schemas possible. It is possible to refer to tables in another user's schema (assuming you have the privileges to do so) by simply qualifying the table name. The default schema is always implied by the connecting user and cannot be overridden.
      Postgres: Arbitrarily named schemas are supported. Rather than the concept of a "default schema", Postgres supports the idea of a search path of schemas, whereby unqualified table references cause a search of the list of schemas in order, and the first schema in the path is the "current" one for creation purposes. Unfortunately, there is no way to specify this search path on the JDBC connection URL, so the default schema comes from the user definition, ultimately defaulting to the default "public" schema

      Default value is null

    • nullIntegerValue

      public int nullIntegerValue
      If noNullUpdates is set, the value to use for any integer field that has a null value assigned on an update operation, and does not specify an explicit nullReplacementValue.

      Default value is 0

      See Also:
      • noNullUpdates
      • com.smartgwt.client.docs.serverds.DataSourceField#nullReplacementValue
    • xmlTag

      public String xmlTag
      Applies to RestConnector dataSources (serverType "rest") only, and is overridable per operationBinding (see OperationBinding.xmlTag).

      This property specifies the name of the enclosing tag when we are generating the XML block used to send context to a REST service where the requestFormat is "xml"
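
      For example, the sketch below wraps outbound XML context in a <CustomerRequest> tag (the tag name and REST endpoint are hypothetical; the dataURL is placed inside serverConfig, as recommended for RestConnector DataSources elsewhere in this class):

           <DataSource ID="customers" serverType="rest" xmlTag="CustomerRequest">
             <serverConfig>
               <dataURL>https://rest.example.com/customers</dataURL>
             </serverConfig>
           </DataSource>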

      Default value is null

      See Also:
    • audit

      public boolean audit
      Enables saving of a log of changes to this DataSource in a second DataSource with the same fields, called the "audit DataSource". NOTE: this feature applies to Enterprise Edition only; for more information on edition-specific features, see http://smartclient.com/product.

      When auditing is enabled, every time a DSRequest modifies this DataSource, a Record is added to the audit DataSource that shows the record as it existed after the change was made (or for a "remove", the values of the record at the time of deletion). In addition, the audit DataSource has additional metadata fields, such as the "audit_modifier" field recording the authenticated user who made the change, and the fields named by auditTypeFieldName, auditRevisionFieldName and auditChangedFieldsFieldName (by default "audit_operationType", "audit_revision" and "audit_changedFields").

      If any of the field names above collide with field names of the DataSource being audited, an integer suffix will also be added, starting with 2 (for example, "audit_modifier2", then "audit_modifier3", etc).

      To omit a data field from the automatically generated audit DataSource, just set DataSourceField.audit to false. Audit can be disabled for a given DSRequest via the server-side API DSRequest.setSkipAudit(), or for a specific operation via the operationBinding.skipAudit setting.

      When viewing DataSources in the DataSource Navigator, a button will be shown at the top of a DataSource's section if it has an audit DataSource to let you conveniently open another section to view it.

      Note: The DataSource audit feature works only with single row operations; operations with allowMultiUpdate enabled are not supported.

      Auto-generated Audit DataSources

      The audit DataSource is normally automatically generated, and unless otherwise specified with auditDataSourceID, the ID of the audit DataSource will be audit_[OriginalDSID].

      By default, the automatically generated audit DataSource will be of the same type as the DataSource being audited; however, if the DataSource being audited is not already a SQLDataSource, we recommend using auditDSConstructor:"sql" to use a SQLDataSource as the audit DataSource. This is because a SQLDataSource used as an audit DataSource will automatically generate a SQL table for storing audit data the first time changes are made. JPA would require manual creation of a Java Bean, and Hibernate requires hbm2ddl.auto=update to be set, which is widely considered unsafe for production use.
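
      For example, the following sketch enables auditing on a JPA DataSource while directing the auto-generated audit DataSource to use SQL storage (the DataSource ID and bean class name are hypothetical):

           <DataSource ID="orders" serverType="jpa" autoDeriveSchema="true"
                       beanClassName="com.example.Order"
                       audit="true" auditDSConstructor="sql"/>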

      Note that the automatically generated audit DataSource may have additional fields that can be populated when new audit data is added. See addedAuditFields for more details.

      Automatically created audit DataSources can be loaded and queried just like other DataSources, using the DataSourceLoader, and using the server-side API DataSource.getAuditDataSource(). However, you must load the DataSource being audited before loading its automatically created audit DataSource.

      Note that automatic SQL table creation can be disabled. See autoCreateAuditTable for details.

      Connecting custom "Users" DataSource to auto-generated Audit DataSources

      Audit DataSources utilize authenticated username (see the docs for RPCManager.getUserId() server-side API) to populate the "audit_modifier" field. To enable quick access to additional user information, you can associate them with a custom "Users" DataSource.

      This association can be configured via the framework setting "userDataSource.foreignKey" in the "server.properties" file, which establishes a relationship between the authenticated user and your custom "Users" DataSource. Declaring this informs the system that the authenticated username is a valid ID for records within a specific "Users" DataSource. Use the regular DataSourceField.foreignKey format where, in the example below, "UserDS" represents the name of the related "Users" DataSource, and "userName" serves as the primary key of that DataSource:

        userDataSource.foreignKey: UserDS.userName
        
      This configuration enables you to access additional user information seamlessly through built-in framework features like DSRequest.additionalOutputs and ListGridField.includeFrom when reviewing audit data. For instance, if "UserDS" declares fields such as "firstName" and "lastName," you can retrieve them using additional listGridFields:
        {name: "userFirstName", includeFrom: "UserDS.firstName"},
        {name: "userLastName", includeFrom: "UserDS.lastName"}
        
      Alternatively, if you're fetching data manually, you can utilize additionalOutputs as follows:
        additionalOutputs: "userFirstName!UserDS.firstName,userLastName!UserDS.lastName"
        

      Manually created Audit DataSources

      The audit DataSource can also be manually created. In this case, you can either follow the naming conventions described above for the ID of the audit DataSource and the names of metadata fields, or use the linked properties to assign custom names. If you omit any data fields from the tracked DataSource in your audit DataSource, those fields will be ignored for auditing purposes, exactly as though DataSourceField.audit had been set to false for an automatically-generated audit DataSource.

      Also note that if a manually defined audit DataSource is declared so that it inherits from the audited DataSource, all of the audited DataSource's fields will be inherited, including the primary keys. Since the audit DataSource's primary key should be the revision field, the inherited primary key field must be redeclared in the audit DataSource with the primaryKey attribute omitted (and not of type "sequence"), to prevent the audit DataSource from having two primary keys.

      Default value is false

      See Also:
    • resultTreeClass

      public Map resultTreeClass
      Class for ResultTrees used by this datasource. If null, defaults to using ResultTree.

      This can be set to a custom subclass of ResultTree that, for example, hangs on to extra information necessary for integration with web services.

      Default value is null

    • guestUserId

      public String guestUserId
      Value to use for the ownerIdField if no one has authenticated.

      This setting can be overridden at the operationBinding level.
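
      As a sketch, unauthenticated requests could be attributed to a fixed "anonymous" user like this (the DataSource, table and field names are hypothetical):

           <DataSource ID="notes" serverType="sql" tableName="notes"
                       ownerIdField="createdBy" guestUserId="anonymous"/>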

      Default value is null

      See Also:
    • autoCreateAuditTable

      public boolean autoCreateAuditTable
      Setting autoCreateAuditTable to true indicates that the audit DataSource will automatically create its SQL table when auditing is enabled.

      Note that the autoCreateAuditTable attribute takes effect only if the framework setting audit.autoCreateTables in server.properties is set to false (it defaults to true).

      Default value is true

    • headers

      public Map headers
      Applies to RestConnector dataSources (serverType "rest") only, and is overridable per operationBinding (see OperationBinding.headers).

      Some REST servers require special HTTP headers to be sent to provide state or authentication details; this property can be used to set up one or more headers to be sent with the REST request. Like other elements of RestConnector config, you typically define this property inside a serverConfig block to keep the details from leaking to clients. Also like other RestConnector config, you can include Velocity templates in your header definitions.

      Headers are simple key/value pairs, defined like this:

           <headers>
               <My-Custom-Header>Some custom value</My-Custom-Header>
               <Another-Special-Header>$config['special.header.value']</Another-Special-Header>
           </headers>
      Note, we automatically add a "Content-Type" header of either application/json or application/xml (depending on the responseFormat), unless you define one by hand in your DataSource's headers definition
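
      Because header values are often sensitive, they would typically be wrapped in a serverConfig block, as in this sketch (the header name and $config key are hypothetical):

           <serverConfig>
             <headers>
               <Authorization>Bearer $config['rest.api.token']</Authorization>
             </headers>
           </serverConfig>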

      Default value is null

      See Also:
    • serverOnly

      public String serverOnly
      Setting a DataSource to be serverOnly="true" ensures that it will not be visible to the client. Any request through IDACall to this DataSource will return a failure response. Only requests which have been initiated on the server-side will be executed against this DataSource.

      Default value is null

    • multiInsertStrategy

      public MultiInsertStrategy multiInsertStrategy
      For dataSources of serverType "sql" only, this property sets the multi-insert strategy for add requests on this dataSource. Only has an effect if the add request specifies a list of records as the data. See the docs for MultiInsertStrategy for more information.

      Note that this setting (along with the other multi-insert properties, multiInsertBatchSize and multiInsertNonMatchingStrategy) can be overridden at the operationBinding level and the dsRequest level. If you wish to configure multi-insert setting globally, you can add the following properties to your server.properties file:

      sql.multi.insert.strategy Global equivalent of multiInsertStrategy
      sql.multi.insert.batchSize Global equivalent of multiInsertBatchSize
      sql.multi.insert.non.matching.strategy Global equivalent of multiInsertNonMatchingStrategy

      Example server.properties settings:

          sql.multi.insert.strategy: multipleValues
          sql.multi.insert.batchSize: 300
          sql.multi.insert.non.matching.strategy: dropMissing
        
      See the "Batch inserting" paragraph of the addData() documentation for a description of the multi-insert implementation.

      Default value is "simple"

      See Also:
    • description

      public String description
      An optional description of the DataSource's content. Not automatically exposed on any component, but useful for developer documentation, and as such is included on any OpenAPI specification generated by the framework. Markdown is a commonly used syntax, but you may also embed HTML content in a CDATA tag.

      Default value is null

    • requiresRole

      public String requiresRole
      Similar to OperationBinding.requiresRole, but controls access to the DataSource as a whole.
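
      For example (a sketch; the role name is hypothetical):

           <DataSource ID="payroll" serverType="sql" tableName="payroll" requiresRole="hr-admin"/>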

      Default value is null

      See Also:
    • compareMetadataForAuditChangeStatus

      public boolean compareMetadataForAuditChangeStatus
      Only applicable to binary fields on audited DataSources.

      When determining if a binary field has changed for auditing purposes, should we compare the metadata values (ie, the associated _filename and _filesize fields) or the actual binary content? If you set this flag to false, this will cause Smart GWT to fetch the existing content of any binary field from the database before any update, and then compare it byte by byte to the new content. You should consider if this will have performance implications for your application, particularly if you store large binary values.

      Note that value comparison of any kind is only performed if the field's DataSourceField.audit setting is "change", but also note that this is the default setting for binary fields

      Default value is true

    • quoteColumnNames

      public Boolean quoteColumnNames
      If set, tells the SQL engine to quote column names in all generated DML and DDL statements for this dataSource. This will ensure that queries generated against tables that do not follow the database product's natural column-naming conventions will still work.

      In general we recommend that you allow the database to use its natural naming scheme when creating tables (put more simply, just do not quote column names in the CREATE TABLE statement); if you do this, you will not need to worry about quoting column names when querying. However, if you are dealing with pre-existing tables, or do not have control over the database naming conventions used, this property may become necessary. This property may also be necessary if you are using field/column names that clash with reserved words in the underlying database (these vary by database, but a field called "date" or "timestamp" would have problems with most database products)

      Note: Only applicable to dataSources of serverType "sql".

      Default value is null

    • fileFormatField

      public String fileFormatField
      The native field name used by this DataSource on the server to represent the fileFormat for FileSource Operations.

      If the fileFormatField is not configured, then a field named "fileFormat" will be used, if it exists. Otherwise, the DataSource will not track fileFormats -- this may be acceptable if, for instance, the fileFormat is always the same.

      The fileFormat is specified according to the extension that would have been used in the filesystem -- for instance, the fileFormat for employees.ds.xml would be "xml".

      Default value is null

      See Also:
    • trimMilliseconds

      public Boolean trimMilliseconds
      For this dataSource, should the millisecond portion of time and datetime values be trimmed off before being sent from client to server or vice versa? By default, millisecond information is preserved (ie, it is not trimmed). You only need to consider trimming millisecond values if their presence causes a problem - for example, a custom server that is not expecting that extra information and so fails parsing.

      Note that there is no inherent support for millisecond precision in Smart GWT widgets; if you need millisecond-precise visibility and editability of values in your client, you must write custom formatters and editors (or sponsor the addition of such things to the framework). Server-side, millisecond-precise values are delivered to and obtained from DataSources, so DataSource implementations that are capable of persisting and reading millisecond values should work transparently. Of the built-in DataSource types, the JPA and Hibernate DataSources will transparently handle millisecond-precise values as long as the underlying database supports millisecond precision, and the underlying column is of an appropriate type. The SQLDataSource provides accuracy to the nearest second by default; you can switch on millisecond precision per-field with the storeMilliseconds attribute.

      Default value is null

    • requestProperties

      public DSRequest requestProperties
      Additional properties to pass through to the DSRequests made by this DataSource. This must be set before any DSRequests are issued and before any component is bound to the DataSource.

      These properties are applied before transformRequest() is called.

      Default value is null

      See Also:
    • auditTypeFieldName

      public String auditTypeFieldName
      For DataSources with auditing enabled, specifies the field name used to store the operationType (in a field of type "text"). If an empty string is specified as the field name, the audit DataSource will not store this field.

      Default value is "audit_operationType"

    • tableCode

      public String tableCode
      Only applicable to the built-in SQL DataSource

      tableCode and the related properties DataSourceField.columnCode, DataSourceField.fkTableCode and DataSourceField.fkColumnCode are read-only attributes that are secure and unique cryptographic hashes of table and column names used by this DataSource.

      These properties are used automatically by client-side framework code to link dataSources together by foreign key when a foreignKey is not explicitly declared, but is found in the SQL schema via the autoDeriveSchema feature.

      A secure hash is used rather than the actual SQL table or column name for security reasons - sending the actual SQL table or column name to the client could aid in attempted SQL injection attacks.

      This feature can be disabled system-wide via setting datasource.autoLinkFKs to false in server.properties.

      Default value is null

    • arrayCriteriaForceExact

      public Boolean arrayCriteriaForceExact
      With ordinary simple criteria, it is possible to provide an array of values for a given field, which means "any of these values". For example:
         var criteria = {
           field1 : "value1",
           field2 : ["value2", "value3"]
        }
      At first glance, this criteria appears to mean "select records where 'field1' is 'value1' and 'field2' is one of 'value2' or 'value3'". However, if the prevailing textMatchStyle is "substring" - as it would be if, for example, you had issued a filterData() call - then this criteria means "select records where 'field1' contains 'value1' somewhere, and 'field2' contains either 'value2' or 'value3'"

      Depending on your requirement, this may or may not be what you want, and you can control it by setting the textMatchStyle on your DSRequest, or by setting a default textMatchStyle on the DataSource (defaultTextMatchStyle), or by specifying that the textMatchStyle should be ignored for certain fields (DataSourceField.ignoreTextMatchStyle)

      However, a common use case is that you want "substring" style filtering on one or more single-valued fields, but exact matching on a list-valued field. For example, say you want a list of customers based in certain cities, with a name matching the substring "software":

         var criteria = {
           name : "software",
           city : ["York", "Newport", "Orleans"]
        }
      Here, using substring matching on the "city" field is likely to give incorrect results, as it will match a number of US cities in addition to the three European cities intended (New York, Yorktown, New Orleans, Newport News, and probably others). And even if this is not an issue for your particular use case, and correct results can be obtained with a substring search, it is very likely to be more costly performance-wise than the exact value match that you really need (potentially much more costly).

      You could achieve exact matching in the above case by marking the "city" field as ignoreTextMatchStyle:true, but that would mean you couldn't perform substring searches on that field in other use cases where that might be desirable.

      In cases like this, Smart GWT by default treats list-valued simple criteria as if ignoreTextMatchStyle is in force for that field. If you want to switch this behavior off, and have list-valued simple criteria handled with the prevailing textMatchStyle, set arrayCriteriaForceExact to false. It is also possible to set this flag per-operationBinding and per-DSRequest - see OperationBinding.arrayCriteriaForceExact and DSRequest.arrayCriteriaForceExact

      If you want to switch this behavior off across the entire system, set the flag in your server.properties file:

            arrayCriteriaForceExact: false
        
      This will only have an effect for server-side DataSources; if you want to change this flag globally for clientOnly dataSources, change the default in the client-side DataSource class, like so:
            isc.DataSource.addProperties({arrayCriteriaForceExact: false});
        
      If you do change the global defaults, it is still possible to override per-dataSource or per-operationBinding, as described above.

      Note, all of this only applies to simple criteria. If your dataSource supports AdvancedCriteria, that gives you much finer and more complete control over the exact meaning of individual criterions.

      Default value is null

      See Also:
    • auditChangedFieldsFieldLength

      public Integer auditChangedFieldsFieldLength
      For DataSources with auditing enabled, specifies the length of the field used to store the names of the fields which were updated. See also auditChangedFieldsFieldName

      To set the changedFields field length for all DataSources that do not override the default, set audit.auditChangedFieldsFieldLength in your server.properties file. For example

               audit.auditChangedFieldsFieldLength: 512
        

      Default value is 255

    • auth

      public RESTAuthentication auth
      For a RestConnector DataSource, authentication details for the REST service. Note, although RestConnector-specific details would typically be wrapped in a serverConfig block to prevent information leakage to the client, we automatically exclude the auth block even if it is not inside serverConfig, because this is information that is always sensitive, and never required on the client

      Default value is null

      See Also:
    • inheritanceMode

      public DSInheritanceMode inheritanceMode
      For dataSources of serverType "sql" and "hibernate", specifies the inheritance mode to use. This property has no effect for any other type of DataSource.

      Default value is "full"

      See Also:
    • nullFloatValue

      public float nullFloatValue
      If noNullUpdates is set, the value to use for any float field that has a null value assigned on an update operation, and does not specify an explicit nullReplacementValue.

      Default value is 0.0

      See Also:
      • noNullUpdates
      • com.smartgwt.client.docs.serverds.DataSourceField#nullReplacementValue
    • schemaNamespace

      public String schemaNamespace
      For a DataSource derived from WSDL or XML schema, the XML namespace this schema belongs to. This is a read-only attribute automatically present on DataSources returned from SchemaSet.getSchema() and WebService.getSchema().

      Default value is null

      See Also:
    • qualifyColumnNames

      public Boolean qualifyColumnNames
      For dataSources of serverType "sql", determines whether we qualify column names with table names in any SQL we generate. This property can be overridden on specific operationBindings.

      Default value is true

      See Also:
    • auditRevisionFieldName

      public String auditRevisionFieldName
      For DataSources with auditing enabled, specifies the field name used to store the revision number for the change (in a field of type "sequence"). If an empty string is specified as the field name, the audit DataSource will not store this field.

      Default value is "audit_revision"

    • dataURL

      public String dataURL
      Default URL to contact to fulfill all DSRequests. Can also be set on a per-operationType basis via OperationBinding.dataURL.

      NOTE: Best practice is to use the same dataURL for all DataSources which fulfill DSRequests via the server-side RPCManager API. Otherwise, cross-DataSource operation queuing will not be possible.

      dataURL and RestConnector

      dataURL is also used to configure the address of the target REST server for a RestConnector DataSource. Typically this is done at the operationBinding level, because the URLs of REST services often encode things about the service itself, like whether it is a fetch or an update. However, if your REST service does use the same URL for every service, or most services, you can configure it here at the DataSource level.

      As discussed in the RestConnector overview, many elements of RestConnector config can be Velocity templates, and dataURL is one such element. For example:

          <operationBinding operationType="fetch" operationId="fetchByPK">
              <dataURL>$config['rest.server.base.url']/fetch/$criteria.pk</dataURL>
          </operationBinding>
      NOTE: Because the server-side RestConnector implementation uses the dataURL property to configure the address of the target REST server, and that property also has meaning on the client, if you are using a RestConnector DataSource you should specify the dataURL of the target REST server inside the serverConfig block.
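
      A minimal sketch of that placement (the ID and URL are illustrative):

            <DataSource ID="remoteOrders" serverType="rest">
                <serverConfig>
                    <dataURL>https://rest.example.com/orders</dataURL>
                </serverConfig>
            </DataSource>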

      Default value is null

      See Also:
    • jsonPrefix

      public String jsonPrefix
      Allows you to specify an arbitrary prefix string to apply to all json format responses sent from the server to this application.

      The inclusion of such a prefix ensures your code is not directly executable outside of your application, as a preventative measure against JavaScript hijacking.

      Only applies to responses formatted as json objects. Does not apply to responses returned via scriptInclude type transport.
      Note: If the prefix / suffix served by your backend is not a constant, you can use dataFormat:"custom" instead and explicitly parse the prefix out as part of transformResponse().
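
      For example, a minimal sketch of declaring a constant guard prefix on a DataSource (the ID and prefix value are illustrative):

            <DataSource ID="accounts" jsonPrefix="while(1);">
                <!-- field definitions omitted -->
            </DataSource>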

      Default value is null

      See Also:
    • relatedTableAlias

      public String relatedTableAlias
      For a SQL DataSource that is referenced by additional foreign keys, this property defines the table alias name to use in generated SQL. If omitted, the DataSource ID will be used to construct the alias.

      Aliasing is necessary when the same table appears more than once in a query. In addition to the use cases described in DataSourceField.relatedTableAlias, this may happen when an includeFrom field that uses a foreign key defined in otherFKs is included multiple times in a related DataSource.

      See the "Automatically generated table aliases" section of the SQL Templating overview for the complete set of general rules for how aliases are generated. Also, see dataSourceField.otherFKs for more details and usage samples.

      Default value is null

      See Also:
    • projectFileLocations

      public String[] projectFileLocations
      For DataSources with type projectFile, specifies locations for the project files. In XML, each location is expressed with a <location> tag, e.g.:
            <projectFileLocations>
                <location>[WEBROOT]/shared/ds</location>
                <location>ds://datasources</location>
            </projectFileLocations>
        
      Directories should be specified as absolute paths on the server. If you want to construct a webroot-relative path, then prefix the path with [WEBROOT] (unlike in server.properties, where you would use $webRoot as the prefix).

      To specify another DataSource to be used via fileSource operations, use ds://dsName (where "dsName" is the name of the other DataSource).

      A projectFile DataSource uses the standard fileSource field names: fileName, fileType, fileFormat, fileContents, fileSize and fileLastModified. When defining a projectFile DataSource, you can use inheritsFrom with a value of "ProjectFile" to inherit definitions for these fields -- e.g.:

            <DataSource ID="MyDataSources" type="projectFile" inheritsFrom="ProjectFile">
                <projectFileLocations>
                    <location>[WEBROOT]/shared/ds</location>
                    <location>ds://datasources</location>
                </projectFileLocations>
            </DataSource> 
        

      For directory locations, the fileName is relative to the directory specified. Note that the fileName does not include any extension for type or format. For instance, for "employees.ds.xml", the fileName would be "employees", the fileType would be "ds" and the fileFormat would be "xml".

      A projectFile DataSource executes the various fileSource operations in the following manner. The general rule is that fileName, fileType, and fileFormat are treated as primary keys. If files with the same combination of those attributes exist in more than one of the configured locations, the locations are considered in reverse order, with priority given to the location listed last. When modifying an existing file, the last location which contains the file will be used. When creating a new file, the file will be created in the last configured location.

      • listFiles: Returns a combined list of files from all configured locations. Note that listFiles does not recurse into subdirectories. If the same combination of fileName / fileType / fileFormat exists in more than one configured location, then the data for fileSize and fileLastModified will be taken from the last configured location which contains the file.
      • hasFile: Indicates whether the file exists in any of the configured locations.
      • getFile: Returns file data by searching the locations in reverse order.
      • saveFile: If the file exists, it will be saved in the last location in which it exists. If it is a new file, it will be saved in the last configured location.
      • renameFile: The file will be renamed in the last location in which it exists. Note that if the file exists in more than one location, the rename will not affect other locations. Thus, a subsequent listFiles operation will return the file from the other location (as well as the renamed file).
      • removeFile: The file will be removed from the last location in which it exists. Note that if the file exists in more than one location, the removal will not affect other locations. Thus, a subsequent listFiles operation will return the file from the other location.

      For convenience, a projectFile DataSource also responds to the standard DataSource operations, in the following manner:

      • add: Executes a saveFile operation, either adding the file or updating an existing file.
      • fetch: Executes a listFiles operation. Note that the results will not include the fileContents. In order to obtain the fileContents, you must use a getFile operation.
      • update: Executes a renameFile operation. Note that this will not update the fileContents -- for that, you need to use "add", or a saveFile operation.
      • remove: Executes a removeFile operation.

      If you specify both projectFileKey and projectFileLocations, then both will be used, with the projectFileLocations applied last.

      Default value is null

      See Also:
    • allowCriteriaSubqueries

      public Boolean allowCriteriaSubqueries
      Controls whether AdvancedCriteria subqueries are permitted for this DataSource. Set this value true to always permit subqueries in criteria on this dataSource, and set it false to always disallow them.

      The default value of null means that we use the global setting, controlled by setting the allowCriteriaSubqueries flag in your server.properties file (this global flag is true by default).
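
      For example, a minimal sketch that disallows criteria subqueries for one DataSource regardless of the global server.properties setting (the ID is illustrative):

            <DataSource ID="orders" serverType="sql" allowCriteriaSubqueries="false">
                <!-- field definitions omitted -->
            </DataSource>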

      Default value is null

      See Also:
    • ownerIdField

      public String ownerIdField
      Requires that the currently authenticated user match the contents of this field, for client-initiated requests (i.e., where DSRequest.isClientRequest() returns true on the server).

      Note, the behaviors described below can be affected by the dataSource properties ownerIdNullRole and ownerIdNullAccess, so please read the documentation for those two properties in conjunction with this article.

      When a new row is added by a client-initiated DSRequest, the ownerIdField will be automatically populated with the currently authenticated user (clobbering any value supplied by the client). Client-initiated attempts to update the ownerIdField will also be prevented.

      If you wish to set the ownerIdField to a different value via an "add" or "update" operation, you can do so in server-side DMI code (possibly consulting DSRequest.getClientSuppliedValues() to get the value that was clobbered).

      For client-initiated "fetch", "update" or "remove" operations, the server will modify client-supplied criteria so that only rows whose ownerIdField matches the currently authenticated user can be read, updated or deleted. For built-in DataSource types (SQL, Hibernate and JPA), the additional criteria required to match the ownerIdField field will ignore the prevailing textMatchStyle and always use exact equality. If you have a custom or generic DataSource implementation, you will want to do the same thing, to avoid false positives on partial matches (e.g., a user with name "a" gets records where the owner is any user with an "a" in the name). You can determine when this is necessary by looking for a DSRequest attribute called "restrictedOwnerIdField". For example, code similar to the following:

        String restrictedField = (String)dsRequest.getAttribute("restrictedOwnerIdField");
        if (field.getName() != null && field.getName().equals(restrictedField)) {
            // Use exact matching for this field
        } else {
            // OK to use the prevailing textMatchStyle
        }

      Also note, for server-initiated requests, this automatic criteria-narrowing is not applied; if your application requires server-initiated "fetch", "update" and "remove" requests to be limited to the currently-authenticated user, your code must add the necessary criteria to the request.

      The ownerIdField setting can be overridden at the OperationBinding.ownerIdField level.

      If ownerIdField is specified, requiresAuthentication will default to true. If requiresAuthentication is explicitly set to false, then unauthenticated users will be able to see all records. To avoid this, you can use guestUserId to specify a default user to apply when no one has authenticated.
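
      For example, a minimal sketch of a DataSource restricted to the authenticated user's own records, with a guest user applied when nobody has authenticated (the ID, field names and guest ID are illustrative):

            <DataSource ID="notes" serverType="sql"
                        ownerIdField="createdBy" guestUserId="anonymous">
                <fields>
                    <field name="id"        type="sequence" primaryKey="true"/>
                    <field name="createdBy" type="text"/>
                    <field name="contents"  type="text"/>
                </fields>
            </DataSource>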

      Default value is null

      See Also:
    • lookAhead

      public int lookAhead
      If we are loading progressively, indicates the number of extra records Smart GWT Server will read beyond the end record requested by the client, in order to establish if there are more records to view. This property has no effect if we are not progressive-loading.

      This property can be tweaked in conjunction with endGap to change behavior at the end of a dataset. For example, with the default values of lookAhead: 1 and endGap: 20, we can end up with the viewport shrinking if we get a case where there really was only one more record (because the client was initially told there were 20 more). This is not a problem per se, but it may be surprising to the user. You could prevent this happening (at the cost of larger reads) by setting lookAhead to be endGap+1.
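
      For example, a minimal sketch applying the lookAhead = endGap + 1 suggestion above, assuming progressive loading is enabled for the DataSource via the progressiveLoading setting (the ID and values are illustrative):

            <DataSource ID="transactions" serverType="sql" progressiveLoading="true"
                        endGap="20" lookAhead="21">
                <!-- field definitions omitted -->
            </DataSource>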

      Default value is 1

      See Also:
    • childrenField

      public String childrenField
      Name of a field in the dataSource expected to contain an explicit array of child nodes. Enables loading a databound tree as a hierarchical data structure, rather than a flat list of nodes linked by foreignKey.
      Note this is an alternative to setting DataSourceField.childrenProperty directly on the childrenField object.

      By default the children field will be assumed to be multiple, for XML databinding. This implies that child data should be delivered in the format:

             <childrenFieldName>
                 <item name="firstChild" ...>
                 <item name="secondChild" ...>
             </childrenFieldName>
        
      However data may also be delivered as a direct list of childrenFieldName elements:
             <childrenFieldName name="firstChild" ...>
             <childrenFieldName name="secondChild" ...>
        
      If you want to return your data in this format, you will need to explicitly set multiple to false in the appropriate dataSource field definition.
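
      For example, a minimal sketch of a tree DataSource whose records carry their children inline (the ID and field names are illustrative; multiple defaults to true on the children field, as described above):

            <DataSource ID="departmentTree" childrenField="children">
                <fields>
                    <field name="name"     type="text"/>
                    <field name="children" multiple="true"/>
                </fields>
            </DataSource>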

      Default value is null

      See Also:
    • httpMethod

      public String httpMethod
      Applies to RestConnector dataSources (serverType "rest") only, and is overridable per operationBinding (see OperationBinding.httpMethod).

      Allows you to configure the HTTP method to use for this DataSource. Since REST services typically use different HTTP methods for different types of operation - it is common to use GET for fetches and PUT for updates, for example - this property is more commonly set at the operationBinding level than at the DataSource level. See the MDN (Mozilla Developer Network) documentation on HTTP request methods for a list of valid HTTP methods.

      If this property is not provided at either the DataSource or operationBinding level, RestConnector uses a default HTTP method depending on the type of operation:

      Operation type    HTTP method
      Fetch             GET
      Add               POST
      Update            PUT
      Remove            DELETE
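
      For example, a minimal sketch of overriding the default method for a single operation at the operationBinding level (the ID, operationId and method are illustrative):

            <DataSource ID="remoteOrders" serverType="rest">
                <operationBindings>
                    <operationBinding operationType="update" operationId="partialUpdate"
                                      httpMethod="PATCH"/>
                </operationBindings>
            </DataSource>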

      Default value is null

      See Also:
    • inheritsFrom

      public String inheritsFrom
      ID of another DataSource this DataSource inherits its fields from.

      Local fields (fields defined in this DataSource) are added to inherited fields to form the full set of fields. Fields with the same name are merged in the same way that databound component fields are merged with DataSource fields.

      The default order of the combined fields is new local fields first (including any fields present in the parent DataSource which the local DataSource re-declares), then parent fields. You can set useParentFieldOrder to instead use the parent's field order, with new local fields appearing last. You can set showLocalFieldsOnly to have all non-local fields hidden.

      Note that only fields are inherited - other properties such as dataURL and dataFormat are not. You can use ordinary inheritance, that is, creating a subclass of DataSource, in order to share properties such as dataURL across a series of DataSources that also inherit fields from each other via inheritsFrom.

      This feature can be used for:

      • creating a customized view (e.g., only certain fields shown) which will be used by multiple databound components.
      • adding presentation-specific attributes to metadata that has been automatically derived from XML Schema or other metadata formats
      • modeling object subclassing and extension in server-side code and storage systems
      • modeling relational database joins, and the equivalents in other systems
      • creating hooks for others to customize your application in a maintainable way. For example, if you have a dataSource "employee", you can create a dataSource "customizedEmployee" which inherits from "employee" but does not initially define any fields, and bind all databound components to "customizedEmployee". Customizations of fields (including appearance changes, field order, new fields, hiding of fields, and custom validation rules) can be added to "customizedEmployee", so that they are kept separately from the original field data and have the best possible chance of working with future versions of the "employee" dataSource. A minimal sketch of this pattern appears below.

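      A minimal sketch of the customization pattern described in the last bullet above (the IDs and the field tweak are illustrative, and each DataSource would normally live in its own .ds.xml file):

            <DataSource ID="employee" serverType="sql">
                <fields>
                    <field name="id"     type="sequence" primaryKey="true"/>
                    <field name="name"   type="text"/>
                    <field name="salary" type="float"/>
                </fields>
            </DataSource>

            <DataSource ID="customizedEmployee" inheritsFrom="employee">
                <fields>
                    <!-- local re-declaration customizes presentation only -->
                    <field name="salary" hidden="true"/>
                </fields>
            </DataSource>
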
      Default value is null

    • wrapInList

      public Boolean wrapInList
      Applies to RestConnector dataSources (serverType "rest") only, and is overridable per operationBinding (see OperationBinding.wrapInList).

      When true, and the requestFormat is "json", and the dsRequest only contains a single valueSet, we will wrap that single valueSet in a list before sending to the remote REST server. This is useful if you have a remote service that wants data to be supplied as a list, even when there is only one entry in the list.
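
      For example, a minimal sketch that enables list-wrapping for a single add operation against a REST service that expects an array body (the ID is illustrative):

            <DataSource ID="remoteOrders" serverType="rest">
                <operationBindings>
                    <!-- applies when the request format is json and a single valueSet is sent -->
                    <operationBinding operationType="add" wrapInList="true"/>
                </operationBindings>
            </DataSource>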

      Default value is false

      See Also:
    • auditedDataSourceID

      public String auditedDataSourceID
      For audit DataSources, this required property specifies the ID of the audited DataSource. Automatically populated for auto-generated audit DataSources.

      Default value varies

    • nullBooleanValue

      public boolean nullBooleanValue
      If noNullUpdates is set, the value to use for any boolean field that has a null value assigned on an update operation, and does not specify an explicit nullReplacementValue.

      Default value is false

      See Also:
      • noNullUpdates
      • com.smartgwt.client.docs.serverds.DataSourceField#nullReplacementValue
    • nullDateValue

      public Date nullDateValue
      If noNullUpdates is set, the value to use for any date or time field that has a null value assigned on an update operation, and does not specify an explicit nullReplacementValue.

      Unlike strings and numbers, there is no "natural" choice for a null replacement value for dates. The default value we have chosen is midnight on January 1st 1970, simply because this is "the epoch" - the value that is returned by calling "new Date(0)".

      Default value is new Date(0) (see above)

      See Also:
      • noNullUpdates
      • com.smartgwt.client.docs.serverds.DataSourceField#nullReplacementValue
    • dataField

      public String dataField
      Name of the field that has the most pertinent numeric, date, or enum value, for use when a DataBoundComponent needs to show a short summary of a record.

      For example, for a DataSource of employees, good choices might be the "salary" field, "hire date" or "status" (active, vacation, on leave, etc).

      Unlike titleField, dataField is not automatically determined in the absence of an explicit setting.

      Default value is null

      See Also:
  • Constructor Details

    • DataSource

      public DataSource()