Enterprise evaluation FAQ

- Data flows
- Deployment

Security

Enterprise Readiness
- Logging and monitoring (using standard AWS tools)
- Backup and restore
- Disaster recovery (HA/DR)
- Infrastructure as code (ability to deploy using standard DevOps tools)

Interoperability
- Calling web services from the client and server with proper authentication, and integrating the returned data via the Datagrok API
- Interacting with Datagrok
- Connecting to common data sources
- relational databases
- local files
- datastore files
- Embedding a Datagrok visualization into a custom web application
- Embedding a custom visualization into Datagrok

Developer experience

Scalability and Performance
- Maximum dataset sizes
- Stability under concurrent user load
- Scaling and stability under load

Extensibility
- Creating custom visualizations
- Creating custom server-side components
- Creating custom scripts and utilizing them in other components
- Ability to reskin Datagrok to appear as a fit-for-purpose web application
- Ability to build custom applications, including data entry, workflows, data models, state management, persistence, etc.

Frontend
- Capacity for holding data on the client with differently sized data sets tied to the project scenarios (sizes are detailed in the data sources section below)
- High-throughput screening
- Virtual screening (minimal requirement)
- Large chemical spaces
- Ability to cross-connect multiple large data tables
- Visualization of data sets in different plots
- High-throughput screening
- Virtual screening
- 2D/3D structure rendering and interaction:
- Ability to customize structure rendering and interact with it
- Atom-based annotations
- 2D-3D connectivity
- Interactivity
- Live data masking
- Filter by selection
- Different input methods (2D drawing etc.)
- API calls
- Ability to plug in non-native components (e.g., React containers) and have them interact with the Datagrok frontend
- Scalability of frontend scripting functionality

Encryption at rest
For AWS deployments, we rely on Amazon's built-in encryption for RDS and S3 buckets.
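For illustration, here is a minimal AWS CDK sketch (TypeScript) of what encryption at rest looks like for an S3 bucket and an RDS instance. The stack, resource names, and engine version are assumptions for this sketch, not Datagrok's actual deployment template.

```typescript
import * as cdk from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as rds from 'aws-cdk-lib/aws-rds';
import * as s3 from 'aws-cdk-lib/aws-s3';

// Hypothetical stack for illustration; names do not come from Datagrok.
export class DatagrokStorageStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string) {
    super(scope, id);

    // S3 bucket with Amazon-managed server-side encryption (SSE-S3);
    // unencrypted (plain HTTP) access is rejected at the bucket-policy level.
    new s3.Bucket(this, 'DatagrokData', {
      encryption: s3.BucketEncryption.S3_MANAGED,
      enforceSSL: true,
    });

    // PostgreSQL RDS instance with encrypted storage (AWS-managed KMS key).
    const vpc = new ec2.Vpc(this, 'Vpc', { maxAzs: 2 });
    new rds.DatabaseInstance(this, 'DatagrokDb', {
      engine: rds.DatabaseInstanceEngine.postgres({
        version: rds.PostgresEngineVersion.VER_15,
      }),
      vpc,
      storageEncrypted: true,
    });
  }
}
```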
Encryption in transit
All client-server communication uses HTTPS, so all traffic is encrypted in transit.
Server API
The Datagrok client interacts with the server through an HTTP REST API. An authentication token must be passed to access all features; a sample call is sketched below.
Proof of concept video
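A token-authenticated call over HTTPS might look like the following sketch. The base URL, endpoint path, and header scheme are illustrative assumptions; consult the Datagrok API reference for the actual routes.

```typescript
// Hypothetical endpoint and token handling, for illustration only.
const BASE_URL = 'https://datagrok.example.com/api'; // your Datagrok instance
const TOKEN = process.env.DATAGROK_API_TOKEN!;       // token issued after login

async function fetchFromDatagrok(path: string): Promise<unknown> {
  const response = await fetch(`${BASE_URL}${path}`, {
    // The authentication token accompanies every request.
    headers: { Authorization: `Bearer ${TOKEN}` },
  });
  if (!response.ok)
    throw new Error(`Request failed: ${response.status} ${response.statusText}`);
  return response.json();
}

// Usage: fetchFromDatagrok('/connections').then(console.log);
```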
Logging and monitoring
For AWS deployments, we rely on the Amazon CloudWatch service, which provides a convenient way to collect and observe metrics and logs.
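As one illustration, containers deployed via AWS CDK can ship their stdout/stderr to CloudWatch Logs through the awslogs driver. The task definition, image tag, and retention period below are assumptions, not the shipped configuration.

```typescript
import * as cdk from 'aws-cdk-lib';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as logs from 'aws-cdk-lib/aws-logs';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'DatagrokLogging');

// Fargate task whose container logs stream to CloudWatch Logs.
const taskDef = new ecs.FargateTaskDefinition(stack, 'Task');
taskDef.addContainer('datagrok', {
  image: ecs.ContainerImage.fromRegistry('datagrok/datagrok:latest'),
  logging: ecs.LogDrivers.awsLogs({
    streamPrefix: 'datagrok',                   // log stream name prefix
    logRetention: logs.RetentionDays.ONE_MONTH, // keep logs for 30 days
  }),
});
```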
Backup and restore
Amazon provides scheduled backups for RDS and S3; in addition, you can back up and restore the RDS database like any regular PostgreSQL database.
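Because the database is plain PostgreSQL, a standard pg_dump/pg_restore cycle works. A minimal sketch follows; the host and database names are hypothetical, and credentials are assumed to come from ~/.pgpass or the PGPASSWORD environment variable.

```typescript
import { execFileSync } from 'node:child_process';

// Hypothetical RDS endpoint and database name, for illustration.
const HOST = 'datagrok-db.example.us-east-1.rds.amazonaws.com';
const DB = 'datagrok';

// Dump the database to a compressed, custom-format archive.
execFileSync('pg_dump', ['-h', HOST, '-U', 'postgres', '-Fc', '-f', 'datagrok.dump', DB]);

// Later, restore the archive, dropping stale objects first (--clean).
execFileSync('pg_restore', [
  '-h', HOST, '-U', 'postgres', '-d', DB, '--clean', '--if-exists', 'datagrok.dump',
]);
```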
Disaster recovery
Datagrok supports Docker-based installation; the Amazon ECS cluster immediately restarts a failed instance (see the ECS sketch under Infrastructure as code below).
Infrastructure as code
Datagrok Docker containers are built using Jenkins; all software is upgraded and patched on every build.
A Docker Compose manifest is used to describe and deploy Datagrok applications.
There are also several advanced options for deploying the application (the ECS option is sketched after this list):
- CloudFormation template to deploy to AWS ECS
- Terraform scripts to deploy to AWS ECS
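As a sketch of the ECS option, the AWS CDK stack below (which synthesizes a CloudFormation template) runs Datagrok as a load-balanced Fargate service; ECS replaces any task that dies or fails its health check, which is the restart behavior the disaster-recovery answer above relies on. Sizing, ports, and the image tag are illustrative assumptions, not the official template.

```typescript
import * as cdk from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as ecsPatterns from 'aws-cdk-lib/aws-ecs-patterns';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'DatagrokEcs');

const vpc = new ec2.Vpc(stack, 'Vpc', { maxAzs: 2 });
const cluster = new ecs.Cluster(stack, 'Cluster', { vpc });

// Load-balanced Fargate service: ECS re-launches failed tasks automatically,
// so a crashed container is replaced without manual intervention.
new ecsPatterns.ApplicationLoadBalancedFargateService(stack, 'Datagrok', {
  cluster,
  desiredCount: 1,
  cpu: 1024,              // 1 vCPU (illustrative sizing)
  memoryLimitMiB: 4096,   // 4 GiB (illustrative sizing)
  taskImageOptions: {
    image: ecs.ContainerImage.fromRegistry('datagrok/datagrok:latest'),
    containerPort: 8080,  // assumed container port
  },
});

app.synth();
```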