public interface CacheSynchronization

Cache synchronization is the mechanism by which changes to data, whether made through dataBoundComponents or programmatically on dataSources directly, are reflected in the client-side caches of other dataBoundComponents. Automatic cache sync means that components update themselves to reflect data changes - so if you update a record in a form, and the same information is currently visible in a grid, the grid data will change to reflect the updates automatically.
The Smart GWT server is designed to retrieve cache sync data in the most optimal way, taking into account the capabilities of the target data store or data service and the context of the request, such as whether it came from a browser or was programmatically initiated on the server. The cacheSyncStrategy setting can be used to manually control how cache sync is performed server-side, whenever that's necessary.
Cache sync behavior is controlled primarily by the CacheSyncStrategy type, and to a lesser extent the CacheSyncTiming. These settings allow you to define a default cache sync approach globally, overridable per-DataSource, per-Operation or per-request. See the CacheSyncStrategy documentation for details.
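As a sketch of that override hierarchy (the DataSource ID, field names and chosen strategies below are illustrative, not recommendations), the per-DataSource and per-operation levels might look like this in a .ds.xml file:

```xml
<!-- hypothetical DataSource overriding the global default strategy -->
<DataSource ID="orders" serverType="sql" cacheSyncStrategy="refetch">
    <fields>
        <field name="orderId" type="sequence" primaryKey="true"/>
        <field name="total"   type="float"/>
    </fields>
    <operationBindings>
        <!-- override again for one specific operation -->
        <operationBinding operationType="update" cacheSyncStrategy="responseValues"/>
    </operationBindings>
</DataSource>
```

A request-level setting (see DSRequest.getCacheSyncStrategy) wins over both of these levels.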
Note that DataSource-level, operation-level and request-level cacheSyncStrategy
settings are honored in all cases (though see below for a caveat on that), but global
settings are defaults only. Smart GWT will override the default if it detects that it
might lead to problems in a given case.
For example, if a DataSource declares a field that specifies the autoGenerated flag, it is saying that the field is not a sequence but its value is nevertheless generated by the database or ORM platform, so we would not expect a value for that field in the request values. So if the global default cacheSyncStrategy is "requestValuesPlusSequences", Smart GWT would return an incomplete cache sync record, which might well look to your users like a broken application.
Smart GWT will detect these cases where we would have possibly incomplete cache sync data, and automatically change the strategy to "refetch". This is done when we have a field that is part of the expected outputs (see DSRequest.outputs and OperationBinding.outputs) and specifies any of the following properties (because they all mean that the field's value is server-generated in some way): autoGenerated, customSQL or customSelectExpression.
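For instance, a field whose value is filled in by a database trigger rather than a sequence might be declared like this (the field name and type are illustrative):

```xml
<!-- value is database-generated, but this is not a sequence column -->
<field name="lastModified" type="datetime" autoGenerated="true"/>
```

Because the client never sends a value for this field, only a strategy that reads it back from the database or the response can return a complete cache sync record.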
Smart GWT may similarly override the cacheSyncTiming setting according to the requirements of a given request. For example, if the global default is "lazy" and there is no DataSource-level, operation-level or request-level timing setting, Smart GWT will override the timing to "immediate" for any client-originated request, amongst other things. See the CacheSyncTiming documentation for further details.
If you do not want these intelligent fallback behaviors for a given DataSource, operation or request, set a cacheSyncStrategy or cacheSyncTiming explicitly on the DataSource, operation or request. An explicit setting will always be honored for cacheSyncStrategy, even when the system knows that it is going to lead to incomplete cache-sync data. However, even an explicit cacheSyncTiming setting at the DataSource, operation or request level will be ignored in certain circumstances, where deferring the cache sync operation could break framework functionality. An example of this is auditing: if your dataSource is configured for automatic auditing, the framework categorically needs the cache sync data in order to write an audit record, and when deferred cache sync is in force, we have no guarantee that cache sync will run at all. So we ignore attempts to configure such a dataSource for deferred cache sync.
How CacheSyncStrategy applies to various built-in dataSource types
With JDBC 3.0+ drivers and a sequenceMode of "jdbcDriver", we can retrieve generated sequence values without requiring an explicit, separate SQL query to retrieve the generated keys. With the default CacheSyncStrategy of "requestValuesPlusSequences", SQLDataSource uses this JDBC approach to avoid a separate SQL query where possible - see the "requestValuesPlusSequences" section of the CacheSyncStrategy documentation for details. With a sequenceMode of "native", we will issue a separate SQL query to retrieve the generated keys, even if you are using "requestValuesPlusSequences". However, this is not a full refetch; it is just a special native query, specific to the database in use, that retrieves only the generated sequence values, so it is still likely to have a performance advantage over "refetch" - in the refetch case, we perform an additional full fetch, using the retrieved keys as the criteria.
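As an illustrative sketch (the DataSource and field names are hypothetical), sequenceMode is set on the DataSource definition:

```xml
<!-- use the JDBC getGeneratedKeys() path rather than a separate native query -->
<DataSource ID="orders" serverType="sql" sequenceMode="jdbcDriver">
    <fields>
        <field name="orderId" type="sequence" primaryKey="true"/>
    </fields>
</DataSource>
```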
Note that sequences (or identity columns, or auto-increment columns - different databases use different mechanisms and terminology for the same concept) are not the only kind of database-generated value. As well as the three DataSource field settings, mentioned above, that can explicitly denote a generated value - autoGenerated, customSQL and customSelectExpression - database-generated values can come from database-declared default values, UDFs and stored procedures, and triggers. In addition, application code can modify data returned from the database in arbitrary ways, via logic in a DMI, server script or custom DataSource implementation. So there are cases where "refetch" is necessary, and for some applications it may be that most or even all cases require it (if you make extensive use of triggers, or enhance many database responses in Java code, for example).
One caveat with "requestValuesPlusSequences": some JDBC drivers return generated key values as instances of java.lang.Integer in ordinary fetch ResultSets, but as instances of java.lang.Long from the getGeneratedKeys() API. This is a problem if you have existing Java code that is expecting an instance of Integer; if you switch to the more efficient "requestValuesPlusSequences" cacheSyncStrategy, your code will be sent an instance of Long instead. Note, this is only a problem for existing server-side Java code; client code is not affected because the Smart GWT framework already coerces all integer-type values to type Long for serialization to the client.
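The hazard can be reproduced in plain Java, independent of any framework (the record map below stands in for a cache sync record; coercing through java.lang.Number is one defensive fix):

```java
import java.util.HashMap;
import java.util.Map;

class KeyTypeDemo {
    // Defensive coercion: accepts whatever Number subtype the driver returned
    static int asInt(Object value) {
        return ((Number) value).intValue();
    }

    public static void main(String[] args) {
        Map<String, Object> record = new HashMap<>();
        // With "requestValuesPlusSequences", the generated key may arrive as a Long...
        record.put("orderId", 42L);
        try {
            // ...so a direct cast that worked against fetch ResultSets now fails
            Integer id = (Integer) record.get("orderId");
            System.out.println("cast succeeded: " + id);
        } catch (ClassCastException e) {
            System.out.println("cast failed: value is a "
                    + record.get("orderId").getClass().getSimpleName());
        }
        // Coercing via Number works whether the value is an Integer or a Long
        System.out.println("coerced: " + asInt(record.get("orderId")));
    }
}
```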
The Smart GWT framework offers a number of configurable mitigations for this problem:

- You can declare the default key size ("small" or "large") in your server.properties file, per database product or per dbName (the latter is more specific and wins if there is a conflict). For example:

    # Change all Postgres db connections to use small keys
    sql.postgresql.defaultKeySize: small
    # Change the db with the specific dbName "SalesDB" to use large keys
    sql.SalesDB.defaultKeySize: large
- If you are not happy with the default Java types for key values (java.lang.Integer for "small" sequences, java.lang.Long for "large" sequences), you can declare entries in server.properties to override them, per database product or per dbName, like this:

    # For Postgres, large key values should be returned as instances of BigInteger
    sql.postgresql.javaTypeForLargeKey: java.math.BigInteger
    # and small key values should be returned as instances of Short
    sql.postgresql.javaTypeForSmallKey: java.lang.Short
    # But small key values should be returned as type BigDecimal for AdminDB
    sql.AdminDB.javaTypeForSmallKey: java.math.BigDecimal

  This is useful for new projects that may later switch database vendors, or for projects that use multiple database products in tandem, because it can be used to enforce cross-database type consistency. We would recommend using java.lang.Integer and java.lang.Long for small and large sequences, respectively, since these cover exactly the same range of valid values as the corresponding SQL types, INTEGER and BIGINT.

- The server.properties flag sql.transformGeneratedKeys, on by default, causes the framework to transform the object returned by the getGeneratedKeys() API in accordance with the above. So, if "SalesDB" and "AdminDB" are both PostgreSQL databases, the settings above would mean:

    - dbName="SalesDB" would return an instance of java.math.BigInteger for the sequence field
    - dbName="AdminDB" would return an instance of java.math.BigDecimal for the sequence field (because of the custom javaTypeForSmallKey declared for that dbName)

- These same rules are used by the Admin Console to import SQL DataSources. Additionally, we use these rules to decide on a column type for any integer field that is marked as primaryKey or foreignKey, to ensure that the same SQL type is used at both ends of the relation (because some databases require this)

- If sql.transformGeneratedKeys is explicitly set false, no transformation takes place; your code is directly returned whatever object type the JDBC driver returned

- Finally, the server.properties flag "sql.transformGeneratedKeysToFetchType" should be considered. This flag is off by default; when switched on, it causes the framework to cast the generated value of a sequence field, as returned by the JDBC driver or native query, to the same Java type that a regular fetch of that field would have returned. This is accomplished by running a basic fetch of a single row the first time the information is required, determining the underlying SQL type from the ResultSetMetadata, and then caching that information for future use. The performance impact of this is minor, since it involves a single additional fetch per DataSource instance for the lifetime of the JVM. Note, this flag has no effect if sql.transformGeneratedKeys is false

sql.transformGeneratedKeysToFetchType is only intended for existing projects that have a lot of server-side code that interacts with the Smart GWT Server and may be affected by cacheSyncStrategy "requestValuesPlusSequences" returning a different Java object type for sequence fields than your existing code is expecting. For any new project we would recommend using the other options outlined above to either match your database's return type for fetches, or to force consistency to types you specify.

You can also declare the expected Java type for an individual DataSource field via its javaClass property. Alternatively, Smart GWT will automatically coerce values to match declared types if you are using Javabeans rather than Maps as your data model - see beanClassName. Both of these
approaches take precedence over the mechanisms described above.

HibernateDataSource and JPADataSource only support "refetch". These two implementations integrate with the underlying ORM system at the level of the ORM's API, allowing it to handle the details of database interaction. With these two DataSource types, we are simply working with "persistent objects" - how the ORM manages things like changes made by the database during update queries, or sequence values in add operations, is the ORM's business.
For this reason, HibernateDataSource and JPADataSource install a special CacheSyncStrategy implementation under the "refetch" name that just does nothing, leaving the response data returned by the update operation unchanged.
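The do-nothing behavior can be sketched against a stand-in interface; the two method names below are taken from the manual-invocation example later in this article, but the real CacheSyncStrategy API surface is an assumption here:

```java
// Stand-in for the server-side CacheSyncStrategy interface (hypothetical shape;
// only these two methods appear elsewhere in this article)
interface CacheSync {
    boolean shouldRunCacheSync(Object dsRequest);
    void applyCacheSyncStrategy(Object dsRequest, Object dsResponse);
}

// A "refetch" registration that deliberately does nothing, leaving the
// response data returned by the update operation unchanged
class NoOpCacheSync implements CacheSync {
    public boolean shouldRunCacheSync(Object dsRequest) {
        return false; // never requests a cache sync fetch
    }
    public void applyCacheSyncStrategy(Object dsRequest, Object dsResponse) {
        // intentionally empty: the ORM already manages the persistent objects
    }
}
```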
RestConnector supports all four of the default cache sync strategies. Note that "refetch" involves a second round-trip to the REST service, so may be a performance concern.

"requestValuesPlusSequences" attempts to extract the values for any missing primaryKey fields from the response sent by the REST service to the add or update request, so it is obviously only of use if the REST service returns such values.

"responseValues", the default strategy for RestConnector, just uses the response data sent by the REST service to the update or add request. Again, this is only usable if the REST service returns such data, but if it does, this strategy is ideal.
CacheSyncStrategy and custom dataSource implementations

These custom DataSources will participate in cache sync like any other:

- You can set cacheSyncStrategy on the DataSource, operationBinding or dsRequest
- The default strategy is "responseValues", because that was the prevailing behavior for custom DataSources before cacheSyncStrategy was introduced
- If the strategy in force is "refetch" (ie, you override the default in your server.properties, or set the strategy explicitly on your DataSource or operation binding), you must implement a fetch operation, and if your dataSource has fields of type "sequence", your fetch mechanism must be able to resolve the values of such fields
- "refetch" with a custom dataSource is done lazily, and it is not done at all if nothing asks for the response data. This is because the integration with cache sync happens when the server-side DSResponse's getData() method is called. This happens automatically and will work perfectly for most use cases. If, however, you have some unusual requirement which means you need "refetch" to cause an immediate cache sync fetch like it does with the built-in dataSources, you can do what they do: invoke the CacheSyncStrategy manually from your execution flow, like this:

    CacheSyncStrategy strategy = dsRequest.getCacheSyncStrategy();
    if (strategy.shouldRunCacheSync(dsRequest)) {
        // Apply the cache sync data to the dsResponse, first fetching it if necessary
        strategy.applyCacheSyncStrategy(dsRequest, dsResponse);
    }
The canSyncCache, cacheSyncOperation and useForCacheSync operationBinding flags interact with the above-documented behavior of CacheSyncStrategy as follows:

- If canSyncCache is false, no cache sync logic will run at all
- If a useForCacheSync operation is in force, or the update operation specifies a cacheSyncOperation, that operation will be run if we are refetching the updated record - ie, if the cacheSyncStrategy is "refetch". Depending on the cacheSyncTiming in force, the operation execution may be deferred, and may not run at all. If we are not refetching the updated record, these two flags have no effect

For multi-record update requests, you can set the server.properties flag "default.multi.update.cache.sync.strategy" to "sync"; this will cause the system to use the same cacheSyncStrategy it would use for a regular single-record request on that DataSource. Note that in this case, the default strategy can be auto-overridden by the framework just like a normal single-update strategy.
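Concretely, that flag goes in server.properties (the flag name and value are taken from the description above):

```
# enable single-record-style cache sync for multi-record updates
default.multi.update.cache.sync.strategy: sync
```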
You can also just set a specific cacheSyncStrategy
on the DataSource, operation
or DSRequest, just like with a regular single-record request, and again, these specific
settings are not auto-overridden except in cases where they could potentially cause feature
breakage, as described above.
See also:
CacheSyncStrategy,
DataSource.cacheSyncStrategy,
OperationBinding.cacheSyncStrategy,
com.smartgwt.client.data.DSRequest#getCacheSyncStrategy,
DataSource.cacheSyncTiming,
OperationBinding.cacheSyncTiming,
com.smartgwt.client.data.DSRequest#getCacheSyncTiming,
OperationBinding.canSyncCache,
OperationBinding.useForCacheSync,
OperationBinding.cacheSyncOperation,
DSResponse.getInvalidateCache()