AcuGIS 2025-12-08 08:50:28 +02:00
parent 57ce9d4d75
commit ffebda29e5
57 changed files with 6577 additions and 0 deletions

.readthedocs.yaml Normal file
@@ -0,0 +1,17 @@
version: "2"

build:
  os: "ubuntu-22.04"
  tools:
    python: "3.10"

python:
  install:
    - requirements: docs/requirements.txt

sphinx:
  configuration: docs/conf.py

formats:
  - epub
  - pdf

docs/accordion.md Normal file
@@ -0,0 +1,241 @@
# Accordion Stories
Create narrative content with expandable sections that can include maps, datasets, and rich text.
## Overview
Accordion Stories provide a storytelling interface with expandable sections. Each section can contain rich HTML content, embedded maps, and linked datasets, making them ideal for creating interactive narratives and guided explorations.
## Creating Accordion Stories
### Access
- Navigate to Accordion Stories (`accordion_stories.php`)
- Publishers and admins can create stories
- Users can view stories they have access to
### Story Structure
Stories consist of:
- **Title**: Story title
- **Description**: Story description
- **Sections**: Multiple expandable sections
- **Category**: Optional categorization
- **Permissions**: Access control settings
## Sections
### Section Content
Each section can contain:
- **Title**: Section heading
- **HTML Content**: Rich text with formatting
- **Dataset Link**: Optional dataset association
- **Maps**: Embedded map displays
- **Media**: Images and other media
### Section Features
- **Expandable**: Click to expand/collapse
- **Rich Text**: HTML formatting support
- **Dataset Integration**: Link to datasets
- **Map Integration**: Display maps in sections
- **Ordering**: Custom section order
## Building Stories
### Step 1: Create Story
1. Navigate to Accordion Stories
2. Click "New Story"
3. Enter title and description
4. Select category (optional)
5. Set permissions
### Step 2: Add Sections
1. Open story builder
2. Add new section
3. Enter section title
4. Add HTML content
5. Optionally link dataset
6. Save section
### Step 3: Configure Content
1. Use rich text editor for content
2. Format text with HTML
3. Add images and media
4. Link to datasets
5. Embed maps if needed
### Step 4: Arrange Sections
1. Reorder sections by dragging
2. Set default expanded section
3. Organize narrative flow
4. Save story
## Content Editor
### Rich Text Editing
- **Formatting**: Bold, italic, underline
- **Headings**: Multiple heading levels
- **Lists**: Ordered and unordered lists
- **Links**: Hyperlinks to external content
- **Images**: Image embedding
- **Tables**: Table creation
### HTML Support
- Direct HTML editing
- Custom styling
- Embedded content
- Script integration (if allowed)
## Dataset Integration
### Linking Datasets
- **Dataset Selection**: Choose dataset to link
- **Automatic Display**: Map automatically displays linked dataset
- **Synchronization**: Section expansion shows linked data
- **Filtering**: Apply filters to linked datasets
### Map Display
When a section has a linked dataset:
- Map automatically displays dataset
- Section expansion shows map
- Map updates with section changes
- Interactive map features available
## Viewing Stories
### Story Viewer
- **Left Panel**: Accordion sections
- **Right Panel**: Map display (if dataset linked)
- **Navigation**: Expand/collapse sections
- **Responsive**: Mobile-friendly layout
### Interaction
- **Expand Sections**: Click to expand
- **Collapse Sections**: Click to collapse
- **Map Interaction**: Pan, zoom, query
- **Content Scrolling**: Scroll through content
## Use Cases
### Guided Tours
- Location-based tours
- Historical narratives
- Educational content
- Story maps
### Data Narratives
- Data exploration guides
- Analysis walkthroughs
- Results presentations
- Research stories
### Documentation
- Feature documentation
- User guides
- Tutorial content
- Help systems
### Presentations
- Interactive presentations
- Conference materials
- Public engagement
- Stakeholder reports
## Permissions
### Access Control
- **Public**: Accessible to all users
- **Private**: Creator only
- **Group-based**: Shared with user groups
- **Custom**: Specific user permissions
### Editing Permissions
- **Creator**: Can edit their stories
- **Admins**: Can edit all stories
- **Publishers**: Can create and edit stories
- **Users**: Can view accessible stories
## Story Management
### Editing
- Edit story metadata
- Modify sections
- Update content
- Reorder sections
- Change permissions
### Sharing
- Share via URL
- Set public/private status
- Configure group access
- Embed in other pages
### Organization
- Categorize stories
- Tag stories
- Search stories
- Filter by category
## Example Story
A typical accordion story might include:
1. **Introduction Section**: Overview and context
2. **Data Section**: Dataset description with linked map
3. **Analysis Section**: Analysis results and findings
4. **Conclusion Section**: Summary and next steps
Each section expands to reveal content and updates the map display.
## Best Practices
### Content
- Write clear, engaging content
- Use appropriate section titles
- Organize content logically
- Include visual elements
### Maps
- Link relevant datasets
- Use appropriate basemaps
- Configure map styling
- Test map interactions
### Navigation
- Set logical section order
- Use descriptive titles
- Provide clear navigation
- Test user experience
## Related Documentation
- [Dashboard](dashboard.md)
- [Web Apps](web-apps.md)
- [Dataset Viewer](../ui/dataset-viewer.md)
- [Map Viewer](../ui/map-viewer.md)

docs/analysis-tools/buffer.md Normal file
@@ -0,0 +1,51 @@
# Buffer Analysis
Create buffer zones around features at specified distances.
## Overview
Buffer analysis creates zones around features at specified distances, useful for proximity analysis and impact assessment.
## Inputs
- **Dataset**: Any vector dataset
- **Distance**: Buffer distance in dataset units
- **End Cap Style**: Round or flat
- **Dissolve**: Merge overlapping buffers
## Outputs
New polygon dataset containing:
- Buffer zones around input features
- Original feature attributes
- Buffer distance information
## Example
```json
{
  "dataset_id": 123,
  "distance": 1000,
  "dissolve": false
}
```
## Use Cases
- Service area analysis
- Impact zone identification
- Proximity analysis
- Safety zone mapping
## Notes
- Distance is interpreted in the units of the dataset's SRID (degrees for EPSG:4326, typically meters for projected systems)
- Dissolving reduces output complexity
- Multiple distances can be applied
- Buffers around lines create polygons
## Related Documentation
- [Analysis API](../api/analysis.md)

docs/analysis-tools/clip.md Normal file
@@ -0,0 +1,56 @@
# Clip Analysis
Extract features from a dataset that intersect with a clipping boundary.
## Overview
Clip analysis extracts features that intersect with a clipping boundary, creating a subset of the input dataset.
## Inputs
- **Dataset**: Dataset to clip
- **Clip Geometry**: GeoJSON geometry for clipping boundary
## Outputs
New dataset containing:
- Features that intersect the boundary
- Geometry clipped to boundary
- Original attributes preserved
## Example
```json
{
  "dataset_id": 123,
  "clip_geometry": {
    "type": "Polygon",
    "coordinates": [ ... ]
  }
}
```
## Background Jobs
This analysis runs as a background job. See [Clip Worker](../workers/clip.md) for details.
## Use Cases
- Study area extraction
- Boundary-based filtering
- Area of interest analysis
- Data subset creation
## Notes
- Only intersecting features included
- Geometry clipped to boundary
- Attributes preserved
- Processing time depends on dataset size
## Related Documentation
- [Clip Worker](../workers/clip.md)
- [Analysis API](../api/analysis.md)

docs/analysis-tools/clustering.md Normal file
@@ -0,0 +1,69 @@
# Clustering
Group features based on spatial proximity using clustering algorithms.
## Overview
Clustering groups nearby features into clusters, identifying spatial patterns and groupings in point data.
## Inputs
- **Dataset**: Point dataset
- **Method**: Clustering algorithm
- **Parameters**: Algorithm-specific parameters
## Outputs
New dataset containing:
- Original features
- **Cluster ID**: Assigned cluster identifier
- **Cluster Size**: Number of features in cluster
- Original attributes
## Algorithms
### K-Means Clustering
Groups points into k clusters by minimizing within-cluster variance.
### DBSCAN
Density-based clustering that identifies clusters of varying shapes.
### Hierarchical Clustering
Builds cluster hierarchy using distance measures.
## Example
```json
{
  "dataset_id": 123,
  "method": "kmeans",
  "k": 5
}
```
## Background Jobs
This analysis runs as a background job.
## Use Cases
- Market segmentation
- Service area identification
- Pattern recognition
- Data exploration
## Notes
- Algorithm selection depends on data characteristics
- Parameter tuning affects results
- Results may vary with different random seeds
- Consider spatial scale when interpreting clusters
## Related Documentation
- [Analysis API](../api/analysis.md)

docs/analysis-tools/dissolve.md Normal file
@@ -0,0 +1,65 @@
# Dissolve Analysis
Merge features based on attribute values, optionally aggregating numeric fields.
## Overview
Dissolve analysis merges adjacent or overlapping features that share the same attribute value, creating simplified datasets.
## Inputs
- **Source Dataset**: Dataset to dissolve
- **Dissolve Field**: Field to dissolve on (or merge all)
- **Aggregation Fields** (optional): Fields to aggregate with functions
## Outputs
New dataset containing:
- Merged geometries for each group
- Aggregated attribute values
- Group identifiers
## Aggregation Functions
- **Sum**: Sum of numeric values
- **Average**: Average of numeric values
- **Min/Max**: Minimum/maximum values
- **Count**: Count of features
## Example
```json
{
  "source_dataset_id": 123,
  "dissolve_field": "category",
  "aggregation_fields": {
    "population": "sum",
    "area": "sum"
  }
}
```
## Background Jobs
This analysis runs as a background job. See [Dissolve Worker](../workers/dissolve.md) for details.
## Use Cases
- Administrative boundary simplification
- Aggregated statistics
- Data generalization
- Map simplification
## Notes
- Adjacent features are merged
- Numeric fields are carried into the output only when an aggregation function is specified for them
- Complex geometries may slow processing
- Results depend on dissolve field values
## Related Documentation
- [Dissolve Worker](../workers/dissolve.md)
- [Analysis API](../api/analysis.md)

docs/analysis-tools/erase.md Normal file
@@ -0,0 +1,53 @@
# Erase Analysis
Remove portions of features from an input dataset that overlap with an erase dataset.
## Overview
Erase analysis removes portions of input features that overlap with erase features, creating new geometries.
## Inputs
- **Input Dataset**: Dataset to erase from
- **Erase Dataset**: Dataset to erase with
## Outputs
New dataset containing:
- Features with erased portions removed
- Remaining geometry after erase
- Original attributes preserved
## Example
```json
{
  "input_dataset_id": 123,
  "erase_dataset_id": 124
}
```
## Background Jobs
This analysis runs as a background job. See [Erase Analysis Worker](../workers/erase_analysis.md) for details.
## Use Cases
- Exclusion zone mapping
- Feature subtraction
- Boundary modification
- Area calculations
## Notes
- Overlapping portions are removed
- Non-overlapping features preserved
- Complex geometries may slow processing
- Results depend on overlap extent
## Related Documentation
- [Erase Analysis Worker](../workers/erase_analysis.md)
- [Analysis API](../api/analysis.md)

docs/analysis-tools/hotspot.md Normal file
@@ -0,0 +1,78 @@
# Hot Spot Analysis
Identify statistically significant clusters of high and low values using Getis-Ord Gi* statistics.
## Overview
Hot spot analysis uses the Getis-Ord Gi* statistic to identify statistically significant spatial clusters. Features are classified as:
- **99% Hot Spot**: Very high values, 99% confidence
- **95% Hot Spot**: High values, 95% confidence
- **90% Hot Spot**: High values, 90% confidence
- **Not Significant**: No significant clustering
- **90% Cold Spot**: Low values, 90% confidence
- **95% Cold Spot**: Low values, 95% confidence
- **99% Cold Spot**: Very low values, 99% confidence
## Inputs
- **Dataset**: Point or polygon dataset
- **Value Field**: Numeric field to analyze
- **Neighbor Type**: Distance-based or K-nearest neighbors
- **Distance** (if distance-based): Maximum neighbor distance
- **K Neighbors** (if KNN): Number of nearest neighbors
## Outputs
New dataset containing:
- Original geometry
- **Gi* Z-Score**: Standardized z-score
- **P-Value**: Statistical significance
- **Hot Spot Class**: Categorized class
- Original attributes
## Algorithm
1. Calculate spatial weights matrix based on neighbor configuration
2. Compute Getis-Ord Gi* statistic for each feature
3. Calculate z-scores and p-values
4. Categorize into hot spot classes
5. Store results in output dataset
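For reference, the Gi* statistic computed in step 2 is commonly defined (following Getis and Ord) as:

$$
G_i^* = \frac{\sum_{j=1}^{n} w_{ij} x_j - \bar{X} \sum_{j=1}^{n} w_{ij}}{S \sqrt{\dfrac{n \sum_{j=1}^{n} w_{ij}^{2} - \left(\sum_{j=1}^{n} w_{ij}\right)^{2}}{n-1}}}
$$

where $x_j$ is the value of feature $j$, $w_{ij}$ is the spatial weight between features $i$ and $j$, $n$ is the number of features, $\bar{X}$ is the mean of the value field, and $S = \sqrt{\sum_{j} x_j^{2}/n - \bar{X}^{2}}$. The statistic is itself a z-score, which is why step 3 can compare it directly against normal-distribution confidence levels.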
## Example
```json
{
  "dataset_id": 123,
  "value_field": "population",
  "neighbor_type": "distance",
  "distance": 1000
}
```
## Background Jobs
This analysis runs as a background job. See [Hot Spot Analysis Worker](../workers/hotspot_analysis.md) for details.
## Use Cases
- Crime analysis
- Disease clustering
- Economic activity patterns
- Environmental monitoring
- Social phenomena analysis
## Notes
- Requires numeric field with sufficient variation
- Distance should be appropriate for data scale
- KNN method is generally faster for large datasets
- Results depend on neighbor configuration
## Related Documentation
- [Hot Spot Analysis Worker](../workers/hotspot_analysis.md)
- [Analysis API](../api/analysis.md)
- [Live Hot Spot Analysis](hotspot-live.md)

docs/analysis-tools/index.md Normal file
@@ -0,0 +1,77 @@
# Analysis Tools
Aurora GIS provides a comprehensive suite of spatial analysis tools for vector and raster data.
## Overview
Analysis tools are organized into categories:
- **Proximity Analysis**: Buffer, nearest neighbor, distance calculations
- **Overlay Operations**: Intersect, union, erase, join operations
- **Spatial Analysis**: Hot spots, outliers, KDE, clustering
- **Raster Analysis**: Zonal statistics, raster operations
- **Feature Analysis**: Summarization and aggregation
## Vector Analysis Tools
```{toctree}
:maxdepth: 2
hotspot
outliers
kde
clustering
buffer
nearest
intersect
join
dissolve
erase
clip
summarize
```
## Raster Analysis Tools
```{toctree}
:maxdepth: 2
zonal-stats
raster-histogram
raster-summary
raster-profile
raster-conversion
raster-comparison
```
## Running Analysis
Analysis tools can be run via:
1. **Web Interface**: Use the analysis panels in the map viewer
2. **API**: Make POST requests to analysis endpoints
3. **Batch Tools**: Use the batch analysis interface
## Background Processing
Most analysis operations run as background jobs:
1. Analysis request creates a job in the queue
2. Worker processes the job asynchronously
3. Results are stored in a new dataset
4. User is notified when complete
## Output Formats
Analysis results can be stored as:
- **Static Table**: Permanent table with results
- **View**: Database view that updates with source data
- **Materialized View**: Materialized view requiring refresh
## Related Documentation
- [Workers](../workers/index.md)
- [API Documentation](../api/index.md)
- [Architecture Overview](../architecture.md)

docs/analysis-tools/intersect.md Normal file
@@ -0,0 +1,49 @@
# Intersect Analysis
Find features that intersect between two datasets.
## Overview
Intersect analysis identifies features that spatially overlap between two datasets, creating new features at intersection locations.
## Inputs
- **Input Dataset**: Primary dataset
- **Intersect Dataset**: Dataset to intersect with
- **Output Type**: Intersection geometry type
## Outputs
New dataset containing:
- Intersecting features
- Attributes from both datasets
- Intersection geometry
## Example
```json
{
  "input_dataset_id": 123,
  "intersect_dataset_id": 124
}
```
## Use Cases
- Overlay analysis
- Spatial filtering
- Feature extraction
- Area calculations
## Notes
- Output geometry type depends on input types
- Attributes from both datasets are included
- Empty intersections are excluded
- Processing time depends on dataset sizes
## Related Documentation
- [Analysis API](../api/analysis.md)

docs/analysis-tools/join.md Normal file
@@ -0,0 +1,61 @@
# Join Features
Attach attributes from a target dataset to a source dataset based on spatial relationships.
## Overview
Join features attaches attributes from a target dataset to features in a source dataset based on spatial relationships (intersect, within, contains, etc.).
## Inputs
- **Source Dataset**: Dataset to join to
- **Target Dataset**: Dataset to join from
- **Spatial Relationship**: Intersect, within, contains, etc.
- **Aggregation** (optional): Aggregation functions for multiple matches
## Outputs
New dataset containing:
- Source feature geometry
- Attributes from both datasets
- Aggregated values (if aggregation specified)
## Aggregation Functions
- **Sum**: Sum of numeric values
- **Average**: Average of numeric values
- **Count**: Count of matching features
- **Min/Max**: Minimum/maximum values
## Example
```json
{
  "source_dataset_id": 123,
  "target_dataset_id": 124,
  "relationship": "intersect",
  "aggregation": {
    "population": "sum"
  }
}
```
## Use Cases
- Attribute enrichment
- Spatial data integration
- Aggregated statistics
- Data combination
## Notes
- Multiple target features can match one source feature
- When several target features match one source feature, an aggregation function is required to combine their values
- Spatial relationship affects results
- Processing time depends on dataset sizes
## Related Documentation
- [Analysis API](../api/analysis.md)

docs/analysis-tools/kde.md Normal file
@@ -0,0 +1,63 @@
# Kernel Density Estimation (KDE)
Generate density surfaces from point data using kernel density estimation.
## Overview
KDE creates a continuous density surface from point data, showing where points are concentrated. Higher values indicate greater point density.
## Inputs
- **Dataset**: Point dataset
- **Bandwidth**: Smoothing parameter (default: auto-calculated)
- **Cell Size**: Output raster cell size (default: auto-calculated)
- **Weight Field** (optional): Field to weight points
## Outputs
Raster dataset containing:
- Density values for each cell
- Higher values indicate greater point density
- Proper spatial reference
## Algorithm
1. Calculate optimal bandwidth (if not specified)
2. Create output raster grid
3. For each cell, calculate kernel-weighted sum of nearby points
4. Store density values in raster
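Conceptually, step 3 evaluates a weighted two-dimensional kernel density estimate at each cell center $x$. A standard formulation (the exact kernel and normalization used by the implementation are not specified here) is:

$$
\hat{f}(x) = \frac{1}{h^{2}} \sum_{i=1}^{n} w_i \, K\!\left(\frac{\lVert x - x_i \rVert}{h}\right)
$$

where $x_i$ are the input points, $w_i$ their weights (1 when no weight field is set), $h$ is the bandwidth, and $K$ is a kernel function such as the Gaussian or quartic kernel. Larger $h$ spreads each point's contribution further, which is why bandwidth controls smoothing.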
## Example
```json
{
  "dataset_id": 123,
  "bandwidth": 1000,
  "cell_size": 100
}
```
## Background Jobs
This analysis runs as a background job.
## Use Cases
- Population density mapping
- Crime hotspot visualization
- Species distribution modeling
- Event density analysis
## Notes
- Bandwidth controls smoothing (larger = smoother)
- Cell size controls output resolution
- Weight field allows importance weighting
- Results are sensitive to bandwidth selection
## Related Documentation
- [Analysis API](../api/analysis.md)
- [Raster Tools](raster.md)

docs/analysis-tools/nearest.md Normal file
@@ -0,0 +1,58 @@
# Nearest Neighbor Analysis
Find nearest features from a target dataset for each feature in a source dataset.
## Overview
Nearest neighbor analysis identifies the closest features between two datasets, calculating distances and joining attributes.
## Inputs
- **Source Dataset**: Dataset to find nearest neighbors for
- **Target Dataset**: Dataset to search for neighbors
- **Max Distance**: Maximum search distance (optional)
- **Limit**: Maximum neighbors per feature (default: 1)
## Outputs
New dataset containing:
- Source feature geometry
- Nearest target feature information
- **Distance**: Distance to nearest neighbor
- Attributes from both datasets
## Example
```json
{
  "source_dataset_id": 123,
  "target_dataset_id": 124,
  "max_distance": 5000,
  "limit": 1
}
```
## Background Jobs
This analysis runs as a background job. See [Nearest Analysis Worker](../workers/nearest_analysis.md) for details.
## Use Cases
- Service accessibility analysis
- Nearest facility identification
- Distance calculations
- Spatial joins
## Notes
- Spatial indexes critical for performance
- Large max_distance values may slow processing
- Multiple neighbors can be found per feature
- Results include distance in dataset units
## Related Documentation
- [Nearest Analysis Worker](../workers/nearest_analysis.md)
- [Analysis API](../api/analysis.md)

docs/analysis-tools/outliers.md Normal file
@@ -0,0 +1,74 @@
# Outlier Detection
Identify statistical outliers in numeric fields using z-score or MAD methods.
## Overview
Outlier detection identifies features with values that are statistically unusual compared to the dataset distribution.
## Methods
### Z-Score Method
Uses mean and standard deviation:
- Z-score = (value - mean) / standard_deviation
- Features with |z-score| > threshold are outliers
- The mean and standard deviation are themselves skewed by extreme values, so severe outliers can mask one another
### MAD Method
Uses median and median absolute deviation:
- Modified z-score = 0.6745 * (value - median) / MAD
- Features with |modified z-score| > threshold are outliers
- The median and MAD are resistant to extreme values, making this method more reliable for skewed distributions
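For example, with a median of 10 and a MAD of 2, a value of 20 has a modified z-score of $0.6745 \times (20 - 10) / 2 \approx 3.37$ and would be flagged at the default threshold of 2.0.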
## Inputs
- **Dataset**: Any dataset with numeric field
- **Value Field**: Numeric field to analyze
- **Method**: "zscore" or "mad" (default: "zscore")
- **Threshold**: Z-score threshold or MAD multiplier (default: 2.0)
## Outputs
New dataset containing:
- Original features
- **Outlier Score**: Z-score or MAD score
- **Is Outlier**: Boolean flag
- Original attributes
## Example
```json
{
  "dataset_id": 123,
  "value_field": "income",
  "method": "zscore",
  "threshold": 2.0
}
```
## Background Jobs
This analysis runs as a background job. See [Outlier Analysis Worker](../workers/outlier_analysis.md) for details.
## Use Cases
- Data quality assessment
- Anomaly detection
- Error identification
- Extreme value analysis
## Notes
- Null values are excluded from calculations
- Threshold of 2.0 identifies ~5% of data as outliers (normal distribution)
- MAD method recommended for skewed distributions
- Consider spatial context when interpreting results
## Related Documentation
- [Outlier Analysis Worker](../workers/outlier_analysis.md)
- [Analysis API](../api/analysis.md)

docs/analysis-tools/raster-comparison.md Normal file
@@ -0,0 +1,57 @@
# Raster Comparison
Compare two raster datasets to identify differences.
## Overview
Raster comparison analyzes differences between two raster datasets, calculating change statistics and generating difference rasters.
## Inputs
- **Raster Dataset 1**: First raster dataset
- **Raster Dataset 2**: Second raster dataset
- **Comparison Method**: Difference calculation method
## Outputs
Comparison results containing:
- Difference raster
- Change statistics
- Summary report
## Comparison Methods
- **Difference**: Pixel value differences
- **Percent Change**: Percentage change
- **Ratio**: Ratio of values
## Example
```json
{
  "raster_dataset_id_1": 125,
  "raster_dataset_id_2": 126,
  "method": "difference"
}
```
## Use Cases
- Change detection
- Time series analysis
- Quality comparison
- Validation
## Notes
- Rasters must have compatible extents
- Resampling may be required
- Results depend on comparison method
- Processing time depends on raster sizes
## Related Documentation
- [Analysis API](../api/analysis.md)
- [Raster Tools](raster.md)

docs/analysis-tools/raster-conversion.md Normal file
@@ -0,0 +1,52 @@
# Raster Conversion
Convert raster datasets between formats and data types.
## Overview
Raster conversion transforms raster datasets between different formats, data types, and projections.
## Inputs
- **Raster Dataset**: Raster dataset to convert
- **Output Format**: Target format
- **Data Type**: Target data type
- **Projection** (optional): Target projection
## Outputs
New raster dataset with:
- Converted format
- Specified data type
- Projected coordinates (if specified)
## Example
```json
{
  "raster_dataset_id": 125,
  "output_format": "GeoTIFF",
  "data_type": "Float32"
}
```
## Use Cases
- Format conversion
- Data type optimization
- Projection transformation
- Compatibility preparation
## Notes
- Data type conversion may affect values
- Projection transformation requires resampling
- Large rasters may take time to process
- Output format affects file size
## Related Documentation
- [Analysis API](../api/analysis.md)
- [Raster Tools](raster.md)

docs/analysis-tools/raster-histogram.md Normal file
@@ -0,0 +1,51 @@
# Raster Histogram
Analyze pixel value distributions in raster datasets.
## Overview
Raster histogram analysis generates histograms showing the distribution of pixel values in raster datasets.
## Inputs
- **Raster Dataset**: Raster dataset to analyze
- **Band** (optional): Band to analyze (default: first band)
- **Bins** (optional): Number of histogram bins
## Outputs
Histogram data containing:
- Value ranges for each bin
- Pixel counts for each bin
- Statistics (min, max, mean, median)
## Example
```json
{
  "raster_dataset_id": 125,
  "band": 1,
  "bins": 256
}
```
## Use Cases
- Data distribution analysis
- Quality assessment
- Value range identification
- Visualization preparation
## Notes
- NoData values excluded
- Results depend on raster data type
- Bins affect histogram resolution
- Large rasters may take time to process
## Related Documentation
- [Analysis API](../api/analysis.md)
- [Raster Tools](raster.md)

docs/analysis-tools/raster-profile.md Normal file
@@ -0,0 +1,53 @@
# Raster Profile
Extract pixel values along a line or path.
## Overview
Raster profile extracts pixel values from a raster dataset along a specified line or path.
## Inputs
- **Raster Dataset**: Raster dataset to sample
- **Line Geometry**: GeoJSON LineString for profile path
- **Band** (optional): Band to sample (default: first band)
## Outputs
Profile data containing:
- Distance along line
- Pixel values at each point
- Coordinates
## Example
```json
{
  "raster_dataset_id": 125,
  "line_geometry": {
    "type": "LineString",
    "coordinates": [ ... ]
  }
}
```
## Use Cases
- Elevation profiles
- Transect analysis
- Cross-section extraction
- Value sampling
## Notes
- Line geometry defines sampling path
- Values interpolated for sub-pixel positions
- Results include distance and values
- Processing time depends on line length
## Related Documentation
- [Analysis API](../api/analysis.md)
- [Raster Tools](raster.md)

docs/analysis-tools/raster-summary.md Normal file
@@ -0,0 +1,52 @@
# Raster Summary
Generate summary statistics for raster datasets.
## Overview
Raster summary calculates comprehensive statistics (min, max, mean, stddev, etc.) for raster datasets.
## Inputs
- **Raster Dataset**: Raster dataset to analyze
- **Band** (optional): Band to analyze (default: all bands)
## Outputs
Summary statistics containing:
- Minimum value
- Maximum value
- Mean value
- Standard deviation
- NoData count
- Valid pixel count
## Example
```json
{
  "raster_dataset_id": 125,
  "band": 1
}
```
## Use Cases
- Data quality assessment
- Value range identification
- Statistics reporting
- Data exploration
## Notes
- NoData values excluded
- Statistics calculated for valid pixels only
- Large rasters may take time to process
- Results depend on raster data type
## Related Documentation
- [Analysis API](../api/analysis.md)
- [Raster Tools](raster.md)

docs/analysis-tools/summarize.md Normal file
@@ -0,0 +1,57 @@
# Summarize Within
Calculate summary statistics for features within polygon zones.
## Overview
Summarize within calculates statistics (count, sum, average, etc.) for features that fall within polygon zones.
## Inputs
- **Input Dataset**: Point or line dataset to summarize
- **Zone Dataset**: Polygon dataset defining zones
- **Statistics**: Statistics to calculate
## Outputs
New dataset containing:
- Zone polygons
- Summary statistics for each zone
- Zone attributes
## Statistics
- **Count**: Number of features in zone
- **Sum**: Sum of numeric values
- **Average**: Average of numeric values
- **Min/Max**: Minimum/maximum values
## Example
```json
{
  "input_dataset_id": 123,
  "zone_dataset_id": 124,
  "statistics": ["count", "sum", "average"]
}
```
## Use Cases
- Zonal statistics
- Aggregated analysis
- Summary reporting
- Data aggregation
## Notes
- Features must be within zones
- Multiple statistics can be calculated
- Results depend on zone boundaries
- Processing time depends on dataset sizes
## Related Documentation
- [Analysis API](../api/analysis.md)

docs/analysis-tools/zonal-stats.md Normal file
@@ -0,0 +1,63 @@
# Zonal Statistics
Calculate statistics for raster data within polygon zones.
## Overview
Zonal statistics calculates summary statistics (mean, sum, count, etc.) for raster pixel values within polygon zones.
## Inputs
- **Raster Dataset**: Raster dataset to analyze
- **Zone Dataset**: Polygon dataset defining zones
- **Statistics**: Statistics to calculate
## Outputs
New dataset containing:
- Zone polygons
- Raster statistics for each zone
- Zone attributes
## Statistics
- **Mean**: Average pixel value
- **Sum**: Sum of pixel values
- **Count**: Number of pixels
- **Min/Max**: Minimum/maximum pixel values
- **StdDev**: Standard deviation of pixel values
## Example
```json
{
  "raster_dataset_id": 125,
  "zone_dataset_id": 123,
  "statistics": ["mean", "sum", "count"]
}
```
## Background Jobs
This analysis runs as a background job.
## Use Cases
- Land cover analysis
- Elevation statistics
- Climate data aggregation
- Raster value extraction
## Notes
- Zones must overlap raster extent
- Statistics calculated for overlapping pixels
- NoData values excluded
- Processing time depends on raster and zone sizes
## Related Documentation
- [Analysis API](../api/analysis.md)
- [Raster Tools](raster.md)

docs/api/analysis.md Normal file
@@ -0,0 +1,315 @@
# Analysis API
Endpoints for running spatial analysis tools.
## Hot Spot Analysis
**Endpoint**: `POST /api/analysis_hotspot_run.php`
Run Getis-Ord Gi* hot spot analysis on a dataset.
### Request Body
```json
{
  "dataset_id": 123,
  "value_field": "population",
  "neighbor_type": "distance",
  "distance": 1000,
  "output_mode": "static"
}
```
### Parameters
- `dataset_id` (required): Dataset ID to analyze
- `value_field` (required): Numeric field to analyze
- `neighbor_type` (optional): "distance" or "knn" (default: "distance")
- `distance` (required if neighbor_type="distance"): Distance threshold in dataset units
- `k_neighbors` (required if neighbor_type="knn"): Number of nearest neighbors
- `output_mode` (optional): "static", "view", or "materialized_view" (default: "static")
### Response
```json
{
  "status": "success",
  "job_id": 456,
  "message": "Hot spot analysis job queued"
}
```
### Background Job
The analysis runs as a background job. Use the `job_id` to check status via [Jobs API](jobs.md).
### Example
```bash
curl -X POST "https://example.com/api/analysis_hotspot_run.php" \
  -H "Content-Type: application/json" \
  -H "Cookie: PHPSESSID=..." \
  -d '{
    "dataset_id": 123,
    "value_field": "population",
    "neighbor_type": "distance",
    "distance": 1000
  }'
```
## Outlier Analysis
**Endpoint**: `POST /api/analysis/outlier_run.php`
Identify statistical outliers in a dataset.
### Request Body
```json
{
  "dataset_id": 123,
  "value_field": "income",
  "method": "zscore",
  "threshold": 2.0
}
```
### Parameters
- `dataset_id` (required): Dataset ID to analyze
- `value_field` (required): Numeric field to analyze
- `method` (optional): "zscore" or "mad" (default: "zscore")
- `threshold` (optional): Z-score threshold or MAD multiplier (default: 2.0)
### Response
```json
{
  "status": "success",
  "job_id": 457,
  "message": "Outlier analysis job queued"
}
```
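### Example
The same placeholders as in the hot spot example apply:
```bash
curl -X POST "https://example.com/api/analysis/outlier_run.php" \
  -H "Content-Type: application/json" \
  -H "Cookie: PHPSESSID=..." \
  -d '{
    "dataset_id": 123,
    "value_field": "income",
    "method": "zscore",
    "threshold": 2.0
  }'
```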
## KDE (Kernel Density Estimation)
**Endpoint**: `POST /api/analysis_kde.php`
Generate kernel density estimation surface from point data.
### Request Body
```json
{
  "dataset_id": 123,
  "bandwidth": 1000,
  "cell_size": 100,
  "weight_field": null
}
```
### Parameters
- `dataset_id` (required): Point dataset ID
- `bandwidth` (optional): Bandwidth in dataset units (default: auto-calculated)
- `cell_size` (optional): Output cell size (default: auto-calculated)
- `weight_field` (optional): Field to weight points
### Response
```json
{
  "status": "success",
  "job_id": 458,
  "message": "KDE analysis job queued"
}
```
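### Example
For example, with an explicit bandwidth and cell size:
```bash
curl -X POST "https://example.com/api/analysis_kde.php" \
  -H "Content-Type: application/json" \
  -H "Cookie: PHPSESSID=..." \
  -d '{
    "dataset_id": 123,
    "bandwidth": 1000,
    "cell_size": 100
  }'
```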
## Nearest Neighbor Analysis
**Endpoint**: `POST /api/nearest_run.php`
Find nearest neighbors between two datasets.
### Request Body
```json
{
  "source_dataset_id": 123,
  "target_dataset_id": 124,
  "max_distance": 5000,
  "limit": 1
}
```
### Parameters
- `source_dataset_id` (required): Source dataset ID
- `target_dataset_id` (required): Target dataset ID
- `max_distance` (optional): Maximum search distance
- `limit` (optional): Maximum neighbors per feature (default: 1)
### Response
```json
{
  "status": "success",
  "job_id": 459,
  "message": "Nearest neighbor analysis job queued"
}
```
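### Example
For example, limiting the search to 5000 units and one neighbor per feature:
```bash
curl -X POST "https://example.com/api/nearest_run.php" \
  -H "Content-Type: application/json" \
  -H "Cookie: PHPSESSID=..." \
  -d '{
    "source_dataset_id": 123,
    "target_dataset_id": 124,
    "max_distance": 5000,
    "limit": 1
  }'
```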
## Dissolve Analysis
**Endpoint**: `POST /api/run_dissolve.php`
Dissolve features based on attribute values.
### Request Body
```json
{
  "dataset_id": 123,
  "dissolve_field": "category",
  "aggregation": {
    "population": "sum",
    "area": "sum"
  }
}
```
### Parameters
- `dataset_id` (required): Dataset ID
- `dissolve_field` (required): Field to dissolve on
- `aggregation` (optional): Aggregation functions for numeric fields
### Response
```json
{
  "status": "success",
  "job_id": 460,
  "message": "Dissolve analysis job queued"
}
```
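### Example
For example, dissolving on `category` and summing two numeric fields:
```bash
curl -X POST "https://example.com/api/run_dissolve.php" \
  -H "Content-Type: application/json" \
  -H "Cookie: PHPSESSID=..." \
  -d '{
    "dataset_id": 123,
    "dissolve_field": "category",
    "aggregation": {"population": "sum", "area": "sum"}
  }'
```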
## Clip Analysis
**Endpoint**: `POST /api/datasets_clip_run.php`
Clip features to a boundary geometry.
### Request Body
```json
{
  "dataset_id": 123,
  "clip_geometry": {
    "type": "Polygon",
    "coordinates": [ ... ]
  }
}
```
### Parameters
- `dataset_id` (required): Dataset ID to clip
- `clip_geometry` (required): GeoJSON geometry for clipping boundary
### Response
```json
{
  "status": "success",
  "job_id": 461,
  "message": "Clip analysis job queued"
}
```
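### Example
For example, using a small sample polygon as the clipping boundary (the coordinates below are illustrative only):
```bash
curl -X POST "https://example.com/api/datasets_clip_run.php" \
  -H "Content-Type: application/json" \
  -H "Cookie: PHPSESSID=..." \
  -d '{
    "dataset_id": 123,
    "clip_geometry": {
      "type": "Polygon",
      "coordinates": [[[-74.1, 40.6], [-73.9, 40.6], [-73.9, 40.9], [-74.1, 40.9], [-74.1, 40.6]]]
    }
  }'
```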
## Erase Analysis
**Endpoint**: `POST /api/analysis_erase_run.php`
Erase features from a dataset using another dataset.
### Request Body
```json
{
  "input_dataset_id": 123,
  "erase_dataset_id": 124
}
```
### Parameters
- `input_dataset_id` (required): Input dataset ID
- `erase_dataset_id` (required): Erase dataset ID
### Response
```json
{
  "status": "success",
  "job_id": 462,
  "message": "Erase analysis job queued"
}
```
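### Example
For example:
```bash
curl -X POST "https://example.com/api/analysis_erase_run.php" \
  -H "Content-Type: application/json" \
  -H "Cookie: PHPSESSID=..." \
  -d '{
    "input_dataset_id": 123,
    "erase_dataset_id": 124
  }'
```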
## Zonal Statistics
**Endpoint**: `POST /api/zonal_stats.php`
Calculate statistics for raster data within polygon zones.
### Request Body
```json
{
  "raster_dataset_id": 125,
  "zone_dataset_id": 123,
  "statistics": ["mean", "sum", "count"]
}
```
### Parameters
- `raster_dataset_id` (required): Raster dataset ID
- `zone_dataset_id` (required): Polygon zone dataset ID
- `statistics` (optional): Statistics to calculate (default: all)
### Response
```json
{
  "status": "success",
  "job_id": 463,
  "message": "Zonal statistics job queued"
}
```
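### Example
For example:
```bash
curl -X POST "https://example.com/api/zonal_stats.php" \
  -H "Content-Type: application/json" \
  -H "Cookie: PHPSESSID=..." \
  -d '{
    "raster_dataset_id": 125,
    "zone_dataset_id": 123,
    "statistics": ["mean", "sum", "count"]
  }'
```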
## Output Modes
Analysis results can be stored in different formats:
- **static**: Results stored in a permanent table (default)
- **view**: Results stored as a database view (updates with source data)
- **materialized_view**: Results stored as a materialized view (requires refresh)
## Job Status
All analysis operations run as background jobs. Use the returned `job_id` to check status:
```bash
GET /api/job_status.php?job_id=456
```
See [Jobs API](jobs.md) for details.
## Related Documentation
- [Analysis Tools](../analysis-tools/index.md)
- [Workers](../workers/index.md)
- [Jobs API](jobs.md)

docs/api/datasets.md Normal file
@@ -0,0 +1,309 @@
# Datasets API
Endpoints for managing and querying spatial datasets.
## List Datasets
**Endpoint**: `GET /api/datasets/list.php`
List all datasets accessible to the current user.
### Parameters
- `has_geometry` (optional): Filter datasets with geometry (0 or 1)
### Response
```json
{
  "success": true,
  "data": [
    {
      "id": 123,
      "name": "Sample Dataset",
      "original_name": "sample.geojson",
      "file_type": "geojson",
      "metadata": { ... }
    }
  ]
}
```
### Example
```bash
curl -X GET "https://example.com/api/datasets/list.php?has_geometry=1" \
  -H "Cookie: PHPSESSID=..."
```
## Get Dataset
**Endpoint**: `GET /api/datasets/get.php`
Get detailed information about a specific dataset.
### Parameters
- `id` (required): Dataset ID
### Response
```json
{
  "success": true,
  "data": {
    "id": 123,
    "name": "Sample Dataset",
    "original_name": "sample.geojson",
    "file_type": "geojson",
    "file_size": 1024000,
    "description": "Dataset description",
    "metadata": {
      "feature_count": 1000,
      "geometry_type": "Point",
      "bbox": [-180, -90, 180, 90]
    },
    "uploaded_at": "2024-01-01T00:00:00Z",
    "updated_at": "2024-01-01T00:00:00Z"
  }
}
```
### Example
```bash
curl -X GET "https://example.com/api/datasets/get.php?id=123" \
  -H "Cookie: PHPSESSID=..."
```
## Query Dataset
**Endpoint**: `GET /api/basic/index.php/datasets/{id}/query`
Query features from a dataset with filtering and pagination.
### Parameters
- `page` (optional): Page number (default: 1)
- `limit` (optional): Items per page (default: 100, max: 1000)
- `bbox` (optional): Bounding box filter (minX,minY,maxX,maxY)
- `properties` (optional): Property filters (JSON object)
- `geometry_type` (optional): Filter by geometry type
### Response
```json
{
  "success": true,
  "data": {
    "type": "FeatureCollection",
    "features": [
      {
        "type": "Feature",
        "id": 1,
        "properties": { ... },
        "geometry": { ... }
      }
    ],
    "pagination": {
      "page": 1,
      "limit": 100,
      "total": 1000,
      "pages": 10
    }
  }
}
```
### Example
```bash
curl -X GET "https://example.com/api/basic/index.php/datasets/123/query?page=1&limit=50&bbox=-180,-90,180,90" \
  -H "Cookie: PHPSESSID=..."
```
## Get Dataset Properties
**Endpoint**: `GET /api/datasets/get_properties.php`
Get all unique property keys from a dataset.
### Parameters
- `id` (required): Dataset ID
### Response
```json
{
  "success": true,
  "data": ["property1", "property2", "property3"]
}
```
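### Example
Following the same pattern as the other GET endpoints:
```bash
curl -X GET "https://example.com/api/datasets/get_properties.php?id=123" \
  -H "Cookie: PHPSESSID=..."
```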
## Get Property Values
**Endpoint**: `GET /api/datasets/get_property_values.php`
Get unique values for a specific property.
### Parameters
- `id` (required): Dataset ID
- `property` (required): Property name
### Response
```json
{
  "success": true,
  "data": ["value1", "value2", "value3"]
}
```
## Dataset Filters
**Endpoint**: `GET /api/datasets/filters.php`
Get saved filters for a dataset.
### Parameters
- `id` (required): Dataset ID
### Response
```json
{
  "success": true,
  "data": {
    "filters": [
      {
        "id": 1,
        "name": "Filter Name",
        "conditions": { ... }
      }
    ]
  }
}
```
## Save Dataset Filters
**Endpoint**: `POST /api/datasets/save_filters.php`
Save filter configuration for a dataset.
### Request Body
```json
{
  "id": 123,
  "filters": [
    {
      "name": "Filter Name",
      "conditions": {
        "property": "value",
        "operator": "equals"
      }
    }
  ]
}
```
### Response
```json
{
  "success": true,
  "message": "Filters saved successfully"
}
```
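### Example
The request body above can be posted directly (placeholders as in the other examples):
```bash
curl -X POST "https://example.com/api/datasets/save_filters.php" \
  -H "Content-Type: application/json" \
  -H "Cookie: PHPSESSID=..." \
  -d '{
    "id": 123,
    "filters": [
      {
        "name": "Filter Name",
        "conditions": {"property": "value", "operator": "equals"}
      }
    ]
  }'
```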
## Dataset Legend
**Endpoint**: `GET /api/datasets/legend.php`
Get legend configuration for a dataset.
### Parameters
- `id` (required): Dataset ID
### Response
```json
{
  "success": true,
  "data": {
    "type": "graduated",
    "property": "value",
    "stops": [
      { "value": 0, "color": "#0000ff" },
      { "value": 100, "color": "#ff0000" }
    ]
  }
}
```
## Save Dataset Legend
**Endpoint**: `POST /api/datasets/save_legend.php`
Save legend configuration for a dataset.
### Request Body
```json
{
  "id": 123,
  "legend": {
    "type": "graduated",
    "property": "value",
    "stops": [ ... ]
  }
}
```
### Response
```json
{
  "success": true,
  "message": "Legend saved successfully"
}
```
## Update Dataset Name
**Endpoint**: `POST /api/update_dataset_name.php`
Update the display name of a dataset.
### Request Body
```json
{
  "id": 123,
  "name": "New Dataset Name"
}
```
### Response
```json
{
  "success": true,
  "message": "Dataset name updated"
}
```
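### Example
For example:
```bash
curl -X POST "https://example.com/api/update_dataset_name.php" \
  -H "Content-Type: application/json" \
  -H "Cookie: PHPSESSID=..." \
  -d '{"id": 123, "name": "New Dataset Name"}'
```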
## Background Jobs
Dataset operations that may take time (imports, analysis) are processed as background jobs. See [Jobs API](jobs.md) for job management.
## Related Documentation
- [Analysis Tools](../analysis-tools/index.md)
- [Architecture Overview](../architecture.md)

docs/api/images.md Normal file
@@ -0,0 +1,159 @@
# Images API
Proxy API for GeoServer catalog and WMS/WFS services.
## Ping
**Endpoint**: `GET /api/images/index.php/ping`
Check GeoServer connectivity and version.
### Response
```json
{
  "about": {
    "resource": {
      "@class": "aboutVersion",
      "version": "2.21.0",
      "gitRevision": "...",
      "buildDate": "..."
    }
  }
}
```
### Example
```bash
curl -X GET "https://example.com/api/images/index.php/ping"
```
## Catalog
**Endpoint**: `GET /api/images/index.php/catalog`
List all layers in GeoServer catalog.
### Response
```json
{
  "layers": {
    "layer": [
      {
        "name": "layer1",
        "href": "http://geoserver/rest/layers/layer1.json"
      }
    ]
  }
}
```
## Workspaces
**Endpoint**: `GET /api/images/index.php/workspaces`
List all workspaces.
### Response
```json
{
  "workspaces": {
    "workspace": [
      {
        "name": "workspace1",
        "href": "http://geoserver/rest/workspaces/workspace1.json"
      }
    ]
  }
}
```
## Layers
**Endpoint**: `GET /api/images/index.php/layers`
List all layers (detailed).
### Response
```json
{
  "layers": {
    "layer": [
      {
        "name": "layer1",
        "type": "VECTOR",
        "defaultStyle": { ... },
        "resource": { ... }
      }
    ]
  }
}
```
## WMS Proxy
**Endpoint**: `GET /api/images/index.php/wms`
Proxy WMS requests to GeoServer.
### Parameters
Standard WMS parameters:
- `service`: WMS
- `version`: 1.1.1 or 1.3.0
- `request`: GetMap, GetFeatureInfo, GetCapabilities, etc.
- `layers`: Layer names
- `styles`: Style names
- `bbox`: Bounding box
- `width`, `height`: Image dimensions
- `format`: Output format
- `srs` or `crs`: Spatial reference system
### Example
```bash
curl -X GET "https://example.com/api/images/index.php/wms?service=WMS&version=1.1.1&request=GetMap&layers=layer1&bbox=-180,-90,180,90&width=800&height=600&format=image/png&srs=EPSG:4326"
```
## WFS Proxy
**Endpoint**: `GET /api/images/index.php/wfs`
Proxy WFS requests to GeoServer.
### Parameters
Standard WFS parameters:
- `service`: WFS
- `version`: 1.0.0, 1.1.0, or 2.0.0
- `request`: GetFeature, GetCapabilities, DescribeFeatureType, etc.
- `typeName`: Feature type name
- `outputFormat`: Output format (GML, GeoJSON, etc.)
### Example
```bash
curl -X GET "https://example.com/api/images/index.php/wfs?service=WFS&version=1.1.0&request=GetFeature&typeName=layer1&outputFormat=application/json"
```
## REST Proxy
**Endpoint**: `GET /api/images/index.php/rest/{path}`
Proxy GeoServer REST API requests (read-only by default).
### Example
```bash
curl -X GET "https://example.com/api/images/index.php/rest/layers/layer1.json"
```
## Related Documentation
- [Server API](server.md)
- [Architecture Overview](../architecture.md)

docs/api/index.md Normal file
@@ -0,0 +1,79 @@
# API Reference
Aurora GIS provides a comprehensive RESTful API for programmatic access to datasets, analysis tools, and system functionality.
## API Overview
The API is organized into several sections:
- **Basic API**: Dataset listing, details, and GeoJSON queries
- **Server API**: Server information and capabilities
- **Images API**: GeoServer proxy and catalog access
- **Analysis APIs**: Endpoints for running spatial analysis
- **Worker APIs**: Background job management
- **Dataset APIs**: Dataset-specific operations
## Authentication
Most API endpoints require authentication. Authentication is handled via:
- **Session-based**: For web interface requests
- **API Key**: (Optional, if configured)
Unauthenticated requests return `401 Unauthorized`.
Some endpoints support public access for datasets marked as public.
## Base URLs
- **Basic API**: `/api/basic/index.php`
- **Server API**: `/api/server/index.php`
- **Images API**: `/api/images/index.php`
- **Main API**: `/api.php`
- **Dataset APIs**: `/api/datasets/`
- **Analysis APIs**: `/api/analysis/`
## Response Format
All API responses are in JSON format:
```json
{
  "success": true,
  "data": { ... },
  "error": null
}
```
Error responses:
```json
{
  "success": false,
  "error": "Error message",
  "status": 400
}
```
## API Endpoints
```{toctree}
:maxdepth: 2
datasets
analysis
jobs
images
server
```
## Rate Limiting
API requests are subject to rate limiting to ensure system stability. Contact the administrator for rate limit information.
## Related Documentation
- [Architecture Overview](../architecture.md)
- [Analysis Tools](../analysis-tools/index.md)
- [Workers](../workers/index.md)

docs/api/jobs.md Normal file
@@ -0,0 +1,185 @@
# Jobs API
Endpoints for managing background jobs.
## Get Job Status
**Endpoint**: `GET /api/job_status.php`
Get the current status of a background job.
### Parameters
- `job_id` (required): Job ID
### Response
```json
{
  "status": "success",
  "job_status": "completed",
  "job": {
    "id": 456,
    "job_type": "hotspot_analysis",
    "status": "completed",
    "progress": 100,
    "params": { ... },
    "result": {
      "dataset_id": 789,
      "dataset_name": "Hot Spot Results",
      "table_name": "spatial_data_789"
    },
    "created_at": "2024-01-01T00:00:00Z",
    "started_at": "2024-01-01T00:01:00Z",
    "finished_at": "2024-01-01T00:05:00Z"
  }
}
```
### Job Statuses
- `queued`: Job is waiting to be processed
- `running`: Job is currently being processed
- `completed`: Job completed successfully
- `failed`: Job failed with an error
### Example
```bash
curl -X GET "https://example.com/api/job_status.php?job_id=456" \
  -H "Cookie: PHPSESSID=..."
```
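To wait for a job to finish, this call can be wrapped in a polling loop. The sketch below is illustrative only: it assumes `jq` is installed and relies on the `job_status` field documented above.
```bash
#!/usr/bin/env bash
# Poll a job until it reaches a terminal state (sketch; assumes jq is available).
JOB_ID=456
while true; do
  STATUS=$(curl -s "https://example.com/api/job_status.php?job_id=${JOB_ID}" \
    -H "Cookie: PHPSESSID=..." | jq -r '.job_status')
  echo "job ${JOB_ID}: ${STATUS}"
  if [ "$STATUS" = "completed" ] || [ "$STATUS" = "failed" ]; then
    break
  fi
  sleep 5
done
```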
## Cancel Job
**Endpoint**: `POST /api/job_cancel.php`
Cancel a queued or running job.
### Request Body
```json
{
  "job_id": 456
}
```
### Response
```json
{
  "status": "success",
  "message": "Job cancelled successfully"
}
```
### Notes
- Only queued or running jobs can be cancelled
- Completed or failed jobs cannot be cancelled
- Users can only cancel their own jobs (admins can cancel any job)
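### Example
For example:
```bash
curl -X POST "https://example.com/api/job_cancel.php" \
  -H "Content-Type: application/json" \
  -H "Cookie: PHPSESSID=..." \
  -d '{"job_id": 456}'
```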
## List User Jobs
**Endpoint**: `GET /api/jobs/status.php`
List all jobs for the current user.
### Parameters
- `status` (optional): Filter by status (queued, running, completed, failed)
- `job_type` (optional): Filter by job type
- `limit` (optional): Maximum results (default: 50)
- `offset` (optional): Result offset (default: 0)
### Response
```json
{
  "status": "success",
  "jobs": [
    {
      "id": 456,
      "job_type": "hotspot_analysis",
      "status": "completed",
      "progress": 100,
      "created_at": "2024-01-01T00:00:00Z",
      "finished_at": "2024-01-01T00:05:00Z"
    }
  ],
  "total": 10,
  "limit": 50,
  "offset": 0
}
```
### Example
```bash
curl -X GET "https://example.com/api/jobs/status.php?status=running&limit=10" \
  -H "Cookie: PHPSESSID=..."
```
## Job Result Structure
Completed jobs include a `result` field with job-specific information:
### Hot Spot Analysis Result
```json
{
  "dataset_id": 789,
  "dataset_name": "Hot Spot Results",
  "table_name": "spatial_data_789",
  "row_count": 1000,
  "storage_type": "table"
}
```
### Outlier Analysis Result
```json
{
  "dataset_id": 790,
  "dataset_name": "Outlier Results",
  "table_name": "spatial_data_790",
  "row_count": 50,
  "outlier_count": 50
}
```
### Nearest Analysis Result
```json
{
  "dataset_id": 791,
  "dataset_name": "Nearest Results",
  "table_name": "spatial_data_791",
  "row_count": 500,
  "source_dataset_id": 123,
  "target_dataset_id": 124
}
```
## Error Handling
Failed jobs include an `error_message` field:
```json
{
  "status": "failed",
  "error_message": "Dataset not found",
  "job": {
    "id": 456,
    "status": "failed",
    "error_message": "Dataset not found"
  }
}
```
## Related Documentation
- [Analysis API](analysis.md)
- [Workers](../workers/index.md)
- [Architecture Overview](../architecture.md)

docs/api/server.md Normal file
@@ -0,0 +1,55 @@
# Server API
Endpoints for server information and capabilities.
## Server Information
**Endpoint**: `GET /api/server/index.php`
Get server information and API capabilities.
### Response
```json
{
  "name": "Aurora GIS",
  "version": "1.0.0",
  "capabilities": {
    "datasets": true,
    "analysis": true,
    "raster": true,
    "workers": true
  },
  "endpoints": {
    "datasets": "/api/datasets/",
    "analysis": "/api/analysis/",
    "jobs": "/api/jobs/"
  }
}
```
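### Example
For example (add a session cookie as in the other examples if your instance requires authentication):
```bash
curl -X GET "https://example.com/api/server/index.php"
```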
## Basic API Information
**Endpoint**: `GET /api/basic/index.php`
Get basic API information and available endpoints.
### Response
```json
{
  "name": "Basic API",
  "version": "1.0.0",
  "endpoints": {
    "datasets": "/api/basic/index.php/datasets",
    "query": "/api/basic/index.php/datasets/{id}/query"
  }
}
```
## Related Documentation
- [Datasets API](datasets.md)
- [Analysis API](analysis.md)
- [Architecture Overview](../architecture.md)

docs/architecture.md Normal file
@@ -0,0 +1,353 @@
# Architecture Overview
This document provides a comprehensive overview of the Aurora GIS architecture, including system components, data flows, and design patterns.
## System Architecture
Aurora GIS follows a modular architecture with clear separation between:
- **Frontend**: PHP-based web interface with JavaScript for interactivity
- **Backend**: PHP application layer with PostgreSQL/PostGIS database
- **Workers**: Background job processing system
- **API**: RESTful API layer for programmatic access
- **Analysis Engine**: Spatial analysis tools and algorithms
## Core Components
### 1. Dataset Engine
The dataset engine is the core component responsible for managing spatial datasets.
#### Data Storage Model
Each dataset is stored in its own table following the naming convention `spatial_data_{dataset_id}`:
```sql
CREATE TABLE spatial_data_{id} (
    id SERIAL PRIMARY KEY,
    feature_id TEXT,
    geometry_type TEXT,
    properties JSONB,
    geometry JSONB,
    geom GEOMETRY,
    created_at TIMESTAMP DEFAULT NOW()
);
```
**Benefits:**
- Better performance with large numbers of datasets
- Easier data management and cleanup
- Improved query performance for individual datasets
- Reduced table size and index overhead
#### Dataset Metadata
Dataset metadata is stored in the `spatial_files` table:
- File information (name, path, type, size)
- User-provided description
- Extracted metadata (JSONB)
- Access permissions
- Creation and update timestamps
#### PostGIS Integration
- All spatial data stored as PostGIS `GEOMETRY` type
- Automatic SRID handling (default: 4326)
- Spatial indexes using GiST for performance
- Support for all PostGIS geometry types
### 2. Background Jobs System
The background jobs system enables asynchronous processing of long-running operations.
#### Job Queue
Jobs are stored in the `background_jobs` table:
```sql
CREATE TABLE background_jobs (
    id SERIAL PRIMARY KEY,
    user_id INTEGER,
    job_type TEXT,
    params JSONB,
    status TEXT, -- 'queued', 'running', 'completed', 'failed'
    result JSONB,
    error_message TEXT,
    progress INTEGER,
    created_at TIMESTAMP,
    started_at TIMESTAMP,
    finished_at TIMESTAMP
);
```
#### Job Lifecycle
1. **Enqueue**: Job created with status 'queued'
2. **Fetch**: Worker fetches next job using `FOR UPDATE SKIP LOCKED`
3. **Process**: Worker updates status to 'running' and processes job
4. **Complete**: Worker updates status to 'completed' with results
5. **Error**: On failure, status set to 'failed' with error message
#### Worker Architecture
Workers are long-running PHP CLI scripts that:
- Poll the database for queued jobs
- Process jobs of a specific type
- Handle errors gracefully
- Log progress and results
- Run continuously until stopped
See [Workers Documentation](workers/index.md) for details on each worker.
### 3. Analysis Tools
Aurora GIS provides a comprehensive suite of spatial analysis tools.
#### Vector Analysis Tools
- **Hot Spot Analysis**: Getis-Ord Gi* statistics for identifying clusters
- **Outlier Detection**: Z-score and MAD-based outlier identification
- **KDE (Kernel Density Estimation)**: Density surface generation
- **Clustering**: Spatial clustering algorithms
- **Proximity Analysis**: Buffer, nearest neighbor, distance calculations
- **Overlay Operations**: Intersect, union, erase, join
#### Raster Analysis Tools
- **Zonal Statistics**: Calculate statistics within polygon zones
- **Raster Histogram**: Analyze pixel value distributions
- **Raster Summary**: Generate summary statistics
- **Raster Profile**: Extract values along a line
- **Raster Conversion**: Convert between formats
- **Raster Comparison**: Compare two raster datasets
See [Analysis Tools Documentation](analysis-tools/index.md) for details.
### 4. API Layer
The API layer provides RESTful access to datasets and analysis tools.
#### API Structure
- **Basic API** (`/api/basic/index.php`): Dataset listing, details, GeoJSON queries
- **Server API** (`/api/server/index.php`): Server information and capabilities
- **Images API** (`/api/images/index.php`): GeoServer proxy and catalog
- **Analysis APIs**: Endpoints for running analysis tools
- **Worker APIs**: Endpoints for job management
#### Authentication
- Session-based authentication for web interface
- API key authentication (optional)
- Dataset-level access control
- Public dataset access (configurable)
See [API Documentation](api/index.md) for endpoint details.
### 5. PostGIS Data Flows
#### Import Flow
```
Uploaded File
    ↓
Format Detection
    ↓
Geometry Extraction
    ↓
PostGIS Processing
    ↓
spatial_data_{id} Table
    ↓
Spatial Index Creation
    ↓
Metadata Extraction
    ↓
spatial_files Record
```
#### Analysis Flow
```
User Request
    ↓
Job Enqueue
    ↓
Worker Fetch
    ↓
PostGIS Analysis
    ↓
Result Table/View
    ↓
Job Complete
    ↓
User Notification
```
#### Export Flow
```
Dataset Selection
    ↓
Query PostGIS Table
    ↓
Format Conversion
    ↓
GeoJSON/Shapefile/CSV
    ↓
Download
```
## Data Processing Pipeline
### File Upload Processing
1. **File Validation**: Check file type, size, and format
2. **Geometry Extraction**: Parse geometry from source format
3. **SRID Detection**: Identify or assign spatial reference system
4. **Table Creation**: Create `spatial_data_{id}` table
5. **Data Import**: Insert features into PostGIS table
6. **Index Creation**: Create spatial and attribute indexes
7. **Metadata Extraction**: Extract and store metadata
8. **Registration**: Create `spatial_files` record
### Analysis Processing
1. **Parameter Validation**: Validate input parameters
2. **Job Creation**: Enqueue background job
3. **Worker Processing**: Worker fetches and processes job
4. **PostGIS Execution**: Run spatial analysis queries
5. **Result Storage**: Store results in table/view
6. **Metadata Update**: Update job status and results
7. **User Notification**: Notify user of completion
## Database Schema
### Core Tables
- **spatial_files**: Dataset metadata and file information
- **spatial_data_{id}**: Individual dataset tables (dynamic)
- **background_jobs**: Job queue and status
- **user**: User accounts and authentication
- **access_group**: Access control groups
- **user_access**: User-group associations
- **dataset_permissions**: Dataset-level permissions
### Supporting Tables
- **ogc_connections**: External PostGIS connections
- **scheduled_imports**: Scheduled URL imports
- **map_views**: Saved map configurations
- **dashboards**: Dashboard definitions
- **presentations**: Presentation configurations
- **categories_keywords**: Dataset categorization
## Security Architecture
### Authentication
- Session-based authentication
- OAuth support (GitHub, Google, Microsoft)
- Password hashing (bcrypt)
- Session management
### Authorization
- Role-based access control (Admin, User, Publisher)
- Dataset-level permissions
- Access group management
- Public dataset access (optional)
### Data Security
- SQL injection prevention (prepared statements)
- XSS protection (output escaping)
- File upload validation
- Path traversal prevention
- Secure file storage
## Performance Optimizations
### Database Optimizations
- Spatial indexes (GiST) on geometry columns
- Attribute indexes on frequently queried fields
- Connection pooling (PgBouncer support)
- Query optimization and caching
- Materialized views for complex queries
### Application Optimizations
- Lazy loading of map components
- Pagination for large datasets
- Background job processing
- Caching of metadata and configurations
- Efficient JSONB storage
### Worker Optimizations
- Parallel job processing (multiple workers)
- Job prioritization
- Resource limits and timeouts
- Error handling and retry logic
## Scalability Considerations
### Horizontal Scaling
- Stateless application design
- Database connection pooling
- Worker scaling (multiple worker instances)
- Load balancing support
### Vertical Scaling
- Database query optimization
- Index optimization
- Memory management
- Worker resource allocation
## Integration Points
### External Services
- **GeoServer**: WMS/WFS services
- **QGIS Server**: QGIS project rendering
- **pg_tileserv**: Vector tile generation
- **OAuth Providers**: Authentication
- **S3**: Cloud storage for large files
### Data Sources
- **PostGIS Remote**: External PostGIS databases
- **URL Imports**: Web-accessible spatial data
- **File Uploads**: Local file uploads
- **Overture Maps**: Parquet file imports
- **S3 Buckets**: Cloud-based data sources
## Monitoring and Logging
### Application Logging
- Error logging to files
- Worker-specific logs
- Import operation logs
- API access logs
### Database Monitoring
- Query performance monitoring
- Connection pool monitoring
- Table size monitoring
- Index usage statistics
## Related Documentation
- [Installation Guide](installation.md)
- [Configuration Guide](configuration.md)
- [API Documentation](api/index.md)
- [Workers Documentation](workers/index.md)
- [Analysis Tools Documentation](analysis-tools/index.md)

docs/changelog.md Normal file
@@ -0,0 +1,120 @@
# Changelog
All notable changes to Aurora GIS will be documented in this file.
## [Unreleased]
### Added
- Comprehensive ReadTheDocs documentation
- Sphinx documentation with MyST Markdown
- API documentation for all endpoints
- Worker documentation
- Analysis tools documentation
- UI components documentation
## [1.0.0] - 2024-01-01
### Added
- Initial release of Aurora GIS
- Dataset management system
- Spatial analysis tools
- Background job processing
- RESTful API
- Interactive map viewer
- Dashboard builder
- Support for multiple data formats (GeoJSON, Shapefile, KML, CSV, GeoTIFF)
- PostGIS integration
- OAuth authentication (GitHub, Google, Microsoft)
- URL-based imports
- PostGIS remote connections
- Overture Maps integration
- S3 bucket imports
- QGIS project support
- Raster analysis tools
- Vector analysis tools
- Hot spot analysis
- Outlier detection
- KDE (Kernel Density Estimation)
- Clustering
- Zonal statistics
- Background workers for long-running operations
- Dashboard widgets
- Presentation builder
### Features
#### Data Management
- File upload with drag-and-drop
- Multiple format support
- Metadata extraction
- Dataset versioning
- Access control and permissions
#### Spatial Analysis
- Hot spot analysis (Getis-Ord Gi*)
- Outlier detection (z-score, MAD)
- KDE (Kernel Density Estimation)
- Clustering algorithms
- Buffer analysis
- Nearest neighbor analysis
- Intersect, union, erase operations
- Join operations
- Dissolve operations
- Clip operations
- Summarize within
- Zonal statistics
#### Raster Analysis
- Zonal statistics
- Raster histogram
- Raster summary
- Raster profile
- Raster conversion
- Raster comparison
#### Background Processing
- Asynchronous job processing
- Worker-based architecture
- Job status tracking
- Progress monitoring
#### Visualization
- Interactive Leaflet.js maps
- OpenLayers integration
- Dashboard builder
- Chart generation
- Layer styling
- Legend management
#### API
- RESTful API endpoints
- Dataset management API
- Analysis API
- Job management API
- GeoServer proxy API
## Architecture
### Core Components
- Dataset engine with PostGIS
- Background jobs system
- Analysis tools suite
- API layer
- Web interface
### Technology Stack
- PHP 7.4+
- PostgreSQL 12+ with PostGIS
- Bootstrap 5
- Leaflet.js / OpenLayers
- Chart.js / Plotly
## Related Documentation
- [Installation Guide](installation.md)
- [Configuration Guide](configuration.md)
- [Architecture Overview](architecture.md)
- [API Reference](api/index.md)
- [Workers Documentation](workers/index.md)
- [Analysis Tools](analysis-tools/index.md)

58 docs/conf.py Normal file
@@ -0,0 +1,58 @@
# Configuration file for the Sphinx documentation builder.
#
# For the full list of built-in configuration values, see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Project information -----------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information
project = 'Aurora GIS'
copyright = '2024, Aurora GIS Team'
author = 'Aurora GIS Team'
release = '1.0.0'
# -- General configuration ---------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
extensions = [
    'myst_parser',
    'sphinx.ext.autodoc',
    'sphinx.ext.viewcode',
    'sphinx.ext.intersphinx',
]
templates_path = ['_templates']
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
# -- Options for MyST Markdown -----------------------------------------------
# https://myst-parser.readthedocs.io/en/latest/configuration.html
myst_enable_extensions = [
    "colon_fence",
    "deflist",
    "dollarmath",
    "fieldlist",
    "html_admonition",
    "html_image",
    "linkify",
    "replacements",
    "smartquotes",
    "strikethrough",
    "substitution",
    "tasklist",
]
# -- Options for HTML output -------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output
html_theme = 'sphinx_rtd_theme'
html_static_path = ['_static']
html_logo = None
html_favicon = None
# -- Intersphinx configuration -----------------------------------------------
intersphinx_mapping = {
    'python': ('https://docs.python.org/3', None),
    'postgis': ('https://postgis.net/documentation/', None),
}

311 docs/configuration.md Normal file
@@ -0,0 +1,311 @@
# Configuration Guide
This guide covers all configuration options available in Aurora GIS.
## Configuration Files
### Primary Configuration: `config/const.php`
This file contains the core application constants. It is generated during initialization and normally should not be edited by hand.
```php
const DB_HOST = 'localhost'; // PostgreSQL host
const DB_NAME = 'aurora_gis'; // Database name
const DB_USER = 'aurora_user'; // Database username
const DB_PASS = 'your_password'; // Database password
const DB_PORT = '5432'; // Database port
const DATA_DIR = '/var/www/data'; // Data directory for file storage
const SESS_USR_KEY = 'dc_user'; // Session key for user data
const SUPER_ADMIN_ID = 1; // ID of super admin user
```
### Database Configuration: `config/database.php`
This file handles database connections and connection pooling settings.
Key settings:
- **PDO Error Mode**: Set to `ERRMODE_EXCEPTION` for error handling
- **Prepared Statements**: Uses emulated prepares for PgBouncer compatibility
- **Statement Timeout**: 30 seconds (30000ms)
- **Idle Transaction Timeout**: 15 seconds (15000ms)
## Authentication Configuration
### OAuth Providers
Configure OAuth providers in `config/const.php`:
```php
const DISABLE_OAUTH_USER_CREATION = false; // Set to true to disable auto user creation
const GITHUB_CLIENT_ID = 'your_github_client_id';
const GITHUB_CLIENT_SECRET = 'your_github_client_secret';
const GOOGLE_CLIENT_ID = 'your_google_client_id';
const GOOGLE_CLIENT_SECRET = 'your_google_client_secret';
const MICROSOFT_CLIENT_ID = 'your_microsoft_client_id';
const MICROSOFT_CLIENT_SECRET = 'your_microsoft_client_secret';
const MICROSOFT_TENANT_ID = 'your_microsoft_tenant_id';
```
### OAuth Setup
1. **GitHub OAuth**:
- Go to GitHub Settings > Developer settings > OAuth Apps
- Create a new OAuth App
- Set Authorization callback URL: `https://your-domain/auth-github.php`
- Copy Client ID and Client Secret
2. **Google OAuth**:
- Go to Google Cloud Console > APIs & Services > Credentials
- Create OAuth 2.0 Client ID
- Add authorized redirect URI: `https://your-domain/auth-google.php`
- Copy Client ID and Client Secret
3. **Microsoft OAuth**:
- Go to Azure Portal > App registrations
- Create new registration
- Add redirect URI: `https://your-domain/auth-microsoft.php`
- Copy Application (client) ID, Directory (tenant) ID, and Client secret
## Data Directory Configuration
The `DATA_DIR` constant specifies where uploaded files and processed data are stored:
```php
const DATA_DIR = '/var/www/data';
```
Ensure this directory:
- Exists and is writable by the web server user
- Has sufficient disk space
- Has proper permissions (755 for directories, 644 for files)
Subdirectories created automatically:
- `uploads/` - Uploaded files
- `uploads/geoserver_documents/` - GeoServer documents
- `uploads/tabular/` - Tabular data files
- `uploads/raster/` - Raster files
- `uploads/qgis/` - QGIS projects
- `logs/` - Application logs
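If the layout ever needs to be recreated by hand, the following sketch matches the permissions described above (the `www-data` web server user is a Debian/Ubuntu assumption):
```bash
sudo mkdir -p /var/www/data/uploads/{geoserver_documents,tabular,raster,qgis} /var/www/data/logs
sudo chown -R www-data:www-data /var/www/data
sudo find /var/www/data -type d -exec chmod 755 {} \;   # directories
sudo find /var/www/data -type f -exec chmod 644 {} \;   # files
```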
## Database Settings
### Connection Pooling (PgBouncer)
If using PgBouncer for connection pooling, the application uses emulated prepared statements:
```php
PDO::ATTR_EMULATE_PREPARES => true
```
### Timeout Settings
Configured in `config/database.php`:
```php
$pdo->exec("SET statement_timeout = 30000"); // 30 seconds
$pdo->exec("SET idle_in_transaction_session_timeout = 15000"); // 15 seconds
```
Adjust these values based on your workload:
- Increase `statement_timeout` for long-running queries
- Decrease `idle_in_transaction_session_timeout` to prevent connection leaks
## Application Settings
Application settings are stored in the `app_settings` table and can be managed via the admin interface or directly in the database.
### Common Settings
Access via `includes/settings.php` functions:
```php
// Read a setting, returning $default when the key is not set
get_app_setting($pdo, 'setting_key', $default);
// Create or update a setting
set_app_setting($pdo, 'setting_key', 'value');
```
### System Settings Page
Access system settings via the admin interface at `/system_settings.php`:
- **Site Name**: Display name for the application
- **Default Basemap**: Default map tile provider
- **Max Upload Size**: Maximum file upload size
- **Enable Public Access**: Allow anonymous dataset access
- **Email Settings**: SMTP configuration for notifications
## Worker Configuration
Background workers are configured via systemd service files in the `systemd/` directory.
### Worker Service Files
Each worker has a corresponding `.service` file:
- `hotspot_worker.service` - Hotspot analysis worker
- `outlier_worker.service` - Outlier analysis worker
- `nearest_worker.service` - Nearest neighbor analysis worker
- `dissolve_worker.service` - Dissolve operations worker
- `clip_worker.service` - Clip operations worker
- `raster_clip_worker.service` - Raster clip operations worker
### Configuring Workers
Edit the service file to set:
- Working directory
- PHP path
- User/group
- Environment variables
- Resource limits
Example service file:
```ini
[Unit]
Description=Hotspot Analysis Worker
After=network.target postgresql.service

[Service]
Type=simple
User=www-data
WorkingDirectory=/var/www/html/aurora-gis
ExecStart=/usr/bin/php workers/hotspot_analysis_worker.php
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```
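After creating or editing a unit file, install it and restart the worker, for example:
```bash
sudo cp systemd/hotspot_worker.service /etc/systemd/system/
sudo systemctl daemon-reload                       # pick up unit changes
sudo systemctl enable --now hotspot_worker.service
journalctl -u hotspot_worker.service -f            # follow the worker's output
```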
## GeoServer Configuration
If using GeoServer for WMS/WFS services:
1. Configure GeoServer connection in `config/const.php` or environment variables
2. Set GeoServer admin credentials
3. Configure workspace and data stores
4. Enable required services (WMS, WFS, WCS)
## QGIS Server Configuration
For QGIS project rendering:
1. Install QGIS Server (see Installation Guide)
2. Configure QGIS Server settings in `mapproxy_settings.php`
3. Set QGIS Server URL in application settings
4. Ensure QGIS projects are accessible to QGIS Server
## pg_tileserv Configuration
For vector tile generation:
1. Install and configure pg_tileserv
2. Ensure PostGIS tables have proper SRID constraints
3. Configure pg_tileserv to discover tables automatically
4. Set pg_tileserv URL in application settings
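As a sketch, a stock pg_tileserv instance only needs a connection string; by default it listens on port 7800 and serves tiles at `/{schema}.{table}/{z}/{x}/{y}.pbf`:
```bash
# Point pg_tileserv at the Aurora GIS database and start it
export DATABASE_URL="postgresql://aurora_user:your_password@localhost:5432/aurora_gis"
./pg_tileserv
```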
## Security Configuration
### File Upload Security
- File type validation is enforced
- File size limits can be configured
- Uploaded files are stored outside the web root when possible
- File names are sanitized to prevent path traversal
### Database Security
- Use prepared statements (automatic via PDO)
- Database credentials stored in `config/const.php` (protect this file)
- User access controlled via `access_group` and `user_access` tables
- Dataset-level permissions via `dataset_permissions` table
### Session Security
- Session key configured via `SESS_USR_KEY` constant
- Session cookies should be HTTP-only and secure in production
- Configure session timeout in `php.ini`
## Performance Tuning
### PostgreSQL Tuning
Key PostgreSQL settings for optimal performance:
```conf
# In postgresql.conf

# Increase shared buffers
shared_buffers = 256MB

# Increase work memory for complex queries
work_mem = 16MB

# Enable parallel queries
max_parallel_workers_per_gather = 4

# Optimize for spatial queries (SSD storage)
random_page_cost = 1.1
```
### PHP Tuning
In `php.ini`:
```ini
memory_limit = 512M
max_execution_time = 300
upload_max_filesize = 100M
post_max_size = 100M
```
### Application Tuning
- Enable OPcache for PHP
- Use connection pooling (PgBouncer)
- Configure appropriate worker counts
- Monitor and optimize slow queries
## Environment-Specific Configuration
### Development
- Enable error display: `ini_set('display_errors', 1)`
- Use verbose logging
- Disable caching
- Use test database
### Production
- Disable error display: `ini_set('display_errors', 0)`
- Enable error logging
- Use production database
- Enable caching
- Use HTTPS only
- Configure proper backup strategy
## Monitoring and Logging
### Application Logs
Logs are stored in the `logs/` directory:
- `error.log` - PHP errors
- `worker_*.log` - Worker-specific logs
- `import_*.log` - Import operation logs
### Database Logging
Enable PostgreSQL logging:
```conf
# In postgresql.conf
logging_collector = on
log_directory = 'log'
log_filename = 'postgresql-%Y-%m-%d.log'
log_statement = 'all' # or 'mod' for modifications only
```
## Related Documentation
- [Installation Guide](installation.md)
- [Architecture Overview](architecture.md)
- [Workers Documentation](workers/index.md)

257 docs/dashboard.md Normal file
@@ -0,0 +1,257 @@
# Dashboard
Create custom interactive dashboards with multiple widgets to visualize and analyze spatial data.
## Overview
Dashboards provide a flexible, drag-and-drop interface for building custom data visualization layouts. Combine maps, charts, tables, and analysis widgets to create comprehensive data views.
## Creating Dashboards
### Access
- Navigate to Dashboard Builder (`dashboard_builder.php`)
- Only admins can create new dashboards
- Users with edit permissions can modify existing dashboards
### Building Process
1. **Add Widgets**: Drag widgets from the sidebar onto the canvas
2. **Configure Widgets**: Click widgets to configure data sources and settings
3. **Arrange Layout**: Drag widgets to reposition, resize from corners
4. **Save Dashboard**: Save configuration and assign permissions
## Widget Types
### Map Widget
Interactive map display with multiple layers.
**Configuration**:
- Dataset selection
- Basemap selection
- Layer styling
- Initial extent
- Interaction settings
**Features**:
- Pan and zoom
- Feature identification
- Layer visibility toggle
- Popup configuration
### Chart Widget
Data visualization charts.
**Chart Types**:
- **Bar Chart**: Categorical comparisons
- **Line Chart**: Time series or trends
- **Pie Chart**: Proportional data
- **Scatter Plot**: Correlation analysis
**Configuration**:
- Dataset selection
- X and Y axis fields
- Aggregation functions
- Chart styling
- Update intervals
### Table Widget
Data table display with sorting and filtering.
**Features**:
- Column selection
- Sorting by columns
- Filtering
- Pagination
- Export options
**Configuration**:
- Dataset selection
- Visible columns
- Default sort
- Page size
- Row limit
### Counter Widget
Display summary statistics.
**Functions**:
- **Count**: Number of features
- **Sum**: Sum of numeric values
- **Average**: Mean value
- **Min/Max**: Minimum/maximum values
**Configuration**:
- Dataset selection
- Calculation function
- Value field (for sum/avg/min/max)
- Filter conditions
- Formatting options
### Filter Widget
Dataset filter controls for interactive filtering.
**Filter Types**:
- **Property Filters**: Filter by attribute values
- **Spatial Filters**: Filter by location
- **Date Filters**: Filter by date ranges
- **Numeric Filters**: Filter by numeric ranges
**Features**:
- Synchronize filters across widgets
- Real-time updates
- Save filter presets
- Clear filters
### Vector Analysis Widget
Tabular statistics for vector datasets.
**Statistics**:
- Feature count
- Geometry type distribution
- Attribute summaries
- Spatial extent
**Configuration**:
- Dataset selection
- Statistics to display
- Grouping options
### Raster Analysis Widget
Tabular statistics for raster datasets.
**Statistics**:
- Pixel count
- Value ranges
- Band information
- NoData statistics
**Configuration**:
- Raster dataset selection
- Statistics to display
- Band selection
### Hot Spot Summary Widget
Hot spot analysis summary statistics.
**Information**:
- Total features analyzed
- Hot spot count by class
- Cold spot count by class
- Significance distribution
**Configuration**:
- Hot spot dataset selection
- Class breakdown
- Summary format
### Outlier Summary Widget
Outlier analysis summary statistics.
**Information**:
- Total features
- Outlier count
- Outlier percentage
- Method used (z-score/MAD)
**Configuration**:
- Outlier dataset selection
- Summary format
## Dashboard Features
### Layout Management
- **Drag and Drop**: Reposition widgets by dragging
- **Resize**: Resize widgets from corners
- **Grid System**: Snap to grid for alignment
- **Responsive**: Adapts to different screen sizes
### Configuration
- **Widget Settings**: Configure each widget individually
- **Data Sources**: Link widgets to datasets
- **Styling**: Customize colors, fonts, sizes
- **Update Intervals**: Set refresh rates for live data
### Sharing and Permissions
- **Public Dashboards**: Share via public URL
- **Access Control**: Set permissions per user/group
- **Embed Codes**: Embed dashboards in external sites
- **Export**: Export dashboard configuration
### Viewing Dashboards
- **Full Screen**: View dashboards in full-screen mode
- **Print**: Print-friendly layouts
- **Export**: Export dashboard as image/PDF
- **Mobile**: Responsive mobile views
## Use Cases
### Data Monitoring
- Real-time data monitoring
- Key performance indicators
- Status dashboards
- Alert systems
### Analysis Results
- Analysis result displays
- Statistical summaries
- Trend visualizations
- Comparative analysis
### Public Portals
- Public data portals
- Community dashboards
- Open data displays
- Information kiosks
### Executive Summaries
- High-level overviews
- Executive reports
- Strategic dashboards
- Performance metrics
## Example Dashboard
A typical dashboard might include:
1. **Map Widget**: Showing geographic distribution
2. **Counter Widgets**: Key statistics (total features, average values)
3. **Chart Widget**: Trend analysis over time
4. **Table Widget**: Detailed data view
5. **Filter Widget**: Interactive filtering controls
## API Access
Dashboards can be accessed programmatically:
```bash
# View dashboard
GET /view_dashboard.php?id={dashboard_id}
# Dashboard API
GET /dashboard_api.php?action=get&id={dashboard_id}
```
## Related Documentation
- [Dashboard Builder UI](../ui/dashboard-builder.md)
- [Map Viewer](../ui/map-viewer.md)
- [Analysis Tools](../analysis-tools/index.md)

262 docs/import/esri.md Normal file
@@ -0,0 +1,262 @@
# ESRI/ArcGIS Import
Import spatial data from ArcGIS Server and ArcGIS Online services.
## Overview
ESRI/ArcGIS import allows you to harvest and import data from ArcGIS REST services, including MapServer and FeatureServer endpoints.
## Supported Services
### MapServer
- **Description**: ArcGIS MapServer REST service
- **Features**: Map layers, feature layers, group layers
- **Use Case**: Static map services, published datasets
### FeatureServer
- **Description**: ArcGIS FeatureServer REST service
- **Features**: Editable feature layers, queries
- **Use Case**: Dynamic data, editable services
### ArcGIS Online
- **Description**: ArcGIS Online hosted services
- **Features**: Public and private services
- **Use Case**: Cloud-hosted datasets
## Import Methods
### Service Browser
Browse and import from ArcGIS services:
1. Navigate to ESRI browser page
2. Enter service URL
3. Browse available layers
4. Select layer to import
5. Configure import options
6. Click "Import"
### Direct URL Import
Import directly from service URL:
1. Navigate to URL import page
2. Enter ArcGIS service URL
3. System detects service type
4. Configure import options
5. Click "Import"
## Service URL Format
### MapServer
```
https://server/arcgis/rest/services/ServiceName/MapServer
https://server/arcgis/rest/services/ServiceName/MapServer/{layerId}
```
### FeatureServer
```
https://server/arcgis/rest/services/ServiceName/FeatureServer
https://server/arcgis/rest/services/ServiceName/FeatureServer/{layerId}
```
### ArcGIS Online
```
https://services.arcgis.com/{orgId}/arcgis/rest/services/{serviceName}/FeatureServer
```
## Import Process
### Step 1: Service Discovery
System discovers service information:
- Service type (MapServer/FeatureServer)
- Available layers
- Layer metadata
- Spatial reference system
### Step 2: Layer Selection
Select layer to import:
- Browse available layers
- View layer metadata
- Check geometry types
- Verify attribute fields
### Step 3: Query Configuration
Configure data query:
- **Where Clause**: SQL WHERE clause for filtering
- **Out Fields**: Fields to include
- **Spatial Filter**: Bounding box or geometry
- **Max Records**: Maximum features to import
### Step 4: Authentication
If required, provide credentials:
- **Username/Password**: ArcGIS credentials
- **Token**: ArcGIS token (auto-generated)
- **Anonymous**: For public services
### Step 5: Data Harvesting
Data is harvested from service:
- Paginated queries (1000 features per batch)
- Geometry conversion (ArcGIS JSON to GeoJSON)
- Attribute extraction
- Coordinate transformation
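One batch of such a harvest corresponds to a standard ArcGIS REST query; a sketch with a placeholder server URL:
```bash
# Fetch one 1000-feature page as GeoJSON
curl -G "https://server/arcgis/rest/services/ServiceName/FeatureServer/0/query" \
    --data-urlencode "where=1=1" \
    --data-urlencode "outFields=*" \
    --data-urlencode "resultOffset=0" \
    --data-urlencode "resultRecordCount=1000" \
    --data-urlencode "f=geojson"
```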
### Step 6: Import
Harvested data is imported:
- Table creation: `spatial_data_{file_id}`
- Geometry conversion to PostGIS
- Spatial index creation
- Metadata storage
## Query Parameters
### Where Clause
SQL WHERE clause for filtering:
```sql
OBJECTID > 1000
Category = 'Residential'
Population > 50000
```
### Out Fields
Comma-separated list of fields:
```
OBJECTID,Name,Category,Population
```
Use `*` to request all fields.
### Spatial Filter
Bounding box or geometry:
```json
{
  "geometry": {
    "xmin": -180,
    "ymin": -90,
    "xmax": 180,
    "ymax": 90,
    "spatialReference": {"wkid": 4326}
  },
  "geometryType": "esriGeometryEnvelope"
}
```
## Authentication
### Public Services
No authentication required for public services.
### Secured Services
For secured services, provide:
- **Username**: ArcGIS username
- **Password**: ArcGIS password
- **Token**: Auto-generated from credentials
### Token Management
- Tokens auto-generated from credentials
- Tokens cached for session
- Token refresh handled automatically
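For reference, tokens come from the standard ArcGIS `generateToken` endpoint; this is a sketch of the REST call, not the application's internal code, and the exact endpoint path varies by deployment:
```bash
curl -s "https://server/arcgis/tokens/generateToken" \
    -d "username=user" -d "password=pass" \
    -d "client=referer" -d "referer=https://your-domain" \
    -d "f=json"
# The JSON response contains "token" and "expires"; the token is then
# passed as token=... on subsequent service requests
```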
## Scheduled Imports
Set up recurring imports:
1. Configure ArcGIS import
2. Set schedule (daily, weekly, monthly)
3. Configure update mode
4. Save schedule
**Update Modes**:
- **Replace**: Replace all data
- **Append**: Add new data
- **Upsert**: Update existing, insert new
## Metadata Harvesting
System harvests comprehensive metadata:
- Service information
- Layer metadata
- Field definitions
- Spatial reference
- Extent information
## Example: MapServer Import
Via the service browser:
1. Navigate to the ESRI browser
2. Enter: `https://server/arcgis/rest/services/ServiceName/MapServer`
3. Select a layer
4. Configure the query
5. Click "Harvest"
## Example: FeatureServer Import
```json
{
  "service_url": "https://server/arcgis/rest/services/ServiceName/FeatureServer/0",
  "where": "Category = 'Residential'",
  "out_fields": "*",
  "max_records": 10000,
  "auth_username": "user",
  "auth_password": "pass"
}
```
## Troubleshooting
### Common Issues
**Service not accessible**
- Verify service URL
- Check network connectivity
- Verify service is public or provide credentials
**Authentication failed**
- Verify username/password
- Check service permissions
- Verify token endpoint accessible
**No features returned**
- Check WHERE clause syntax
- Verify layer has data
- Check spatial filter bounds
**Import timeout**
- Reduce max records
- Use spatial filter to limit data
- Consider scheduled import for large datasets
**Geometry errors**
- Verify spatial reference system
- Check for invalid geometries
- Verify geometry type compatibility
## Related Documentation
- [Vector Import](vector.md)
- [Raster Import](raster.md)
- [PostGIS Import](postgis.md)
- [URL Import](vector.md#url-import)

55 docs/import/index.md Normal file
@@ -0,0 +1,55 @@
# Data Import Guide
Aurora GIS supports multiple methods for importing spatial data into the system.
## Overview
Data can be imported from various sources:
- **File Uploads**: Direct file uploads from your computer
- **URL Imports**: Import from web-accessible URLs
- **PostGIS Remote**: Connect to external PostGIS databases
- **ESRI/ArcGIS**: Import from ArcGIS services
- **S3 Buckets**: Import from cloud storage
- **Overture Maps**: Import from Overture Maps parquet files
## Import Methods
```{toctree}
:maxdepth: 2
vector
raster
postgis
esri
```
## General Import Process
1. **Select Import Method**: Choose the appropriate import method
2. **Configure Source**: Provide source location and credentials
3. **Set Parameters**: Configure import options (SRID, filters, etc.)
4. **Execute Import**: Start the import process
5. **Monitor Progress**: Track import status
6. **Verify Results**: Check imported dataset
## Import Options
### Update Modes
- **Replace**: Replace existing data
- **Append**: Add to existing data
- **Upsert**: Update existing, insert new
### Scheduling
- **Immediate**: Import immediately
- **Scheduled**: Schedule for later execution
- **Recurring**: Set up recurring imports
## Related Documentation
- [Installation Guide](../installation.md)
- [Configuration Guide](../configuration.md)
- [Architecture Overview](../architecture.md)

190 docs/import/postgis.md Normal file
@@ -0,0 +1,190 @@
# PostGIS Remote Import
Import spatial data from external PostGIS databases.
## Overview
PostGIS remote import allows you to connect to external PostgreSQL/PostGIS databases and import spatial tables as datasets in Aurora GIS.
## Connection Setup
### Create Connection
1. Navigate to PostGIS import page
2. Click "New Connection"
3. Enter connection details:
- **Host**: Database server address
- **Port**: Database port (default: 5432)
- **Database**: Database name
- **Username**: Database username
- **Password**: Database password
4. Test connection
5. Save connection
### Connection Management
- **Save Connections**: Store credentials securely (encrypted)
- **Test Connections**: Verify connectivity before import
- **Delete Connections**: Remove saved connections
## Import Process
### Step 1: Select Connection
Choose a saved PostGIS connection or enter new connection details.
### Step 2: Browse Database
Browse available schemas and tables:
- **Schemas**: List of database schemas
- **Tables**: Spatial tables in selected schema
- **Columns**: Table columns and geometry information
### Step 3: Configure Import
Set import options:
- **Schema**: Source schema name
- **Table**: Source table name
- **Geometry Column**: Geometry column name (auto-detected)
- **ID Column**: Primary key column (optional)
- **Update Mode**: Replace, append, or upsert
### Step 4: Execute Import
Import can be:
- **Materialized**: Copy data to local database
- **Foreign Table**: Create foreign table (read-only, live connection)
## Import Modes
### Materialized Import
Full data copy to local database:
- **Pros**: Fast queries, no external dependency
- **Cons**: Data duplication, requires refresh for updates
- **Use Case**: Static datasets, analysis workflows
### Foreign Table Import
Live connection to external database:
- **Pros**: Always current, no data duplication
- **Cons**: Requires external connection, slower queries
- **Use Case**: Frequently updated data, large datasets
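PostgreSQL foreign tables of this kind are normally built on the `postgres_fdw` extension; conceptually the import performs something like this sketch (host, database, and table names are placeholders):
```bash
psql -d aurora_gis <<'SQL'
CREATE EXTENSION IF NOT EXISTS postgres_fdw;
CREATE SERVER remote_gis FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'remote-host', port '5432', dbname 'source_db');
CREATE USER MAPPING FOR CURRENT_USER SERVER remote_gis
    OPTIONS (user 'remote_user', password 'remote_pass');
IMPORT FOREIGN SCHEMA public LIMIT TO (parcels)
    FROM SERVER remote_gis INTO public;
SQL
```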
## Update Modes
### Replace
Replace all existing data:
- Delete existing data
- Import all source data
- Use for complete refresh
### Append
Add new data to existing:
- Keep existing data
- Add new records
- Use for incremental updates
### Upsert
Update existing, insert new:
- Requires key columns
- Updates matching records
- Inserts new records
- Use for incremental updates with changes
## Scheduled Imports
Set up recurring imports:
1. Configure import
2. Set schedule:
- **Daily**: Run at specified time
- **Weekly**: Run on specified day
- **Monthly**: Run on specified date
3. Configure update mode
4. Save schedule
## Refresh Import
Manually refresh existing imports:
1. Navigate to import history
2. Select import to refresh
3. Click "Refresh"
4. System re-imports data using original settings
## Connection Security
### Credential Storage
- Passwords encrypted in database
- Secure connection testing
- Access control per user
### Network Security
- Use SSL connections when available
- Configure firewall rules
- Use VPN for remote databases
## Example: Materialized Import
```json
{
  "connection_id": 1,
  "schema": "public",
  "table": "parcels",
  "geometry_column": "geom",
  "id_column": "parcel_id",
  "update_mode": "replace",
  "materialize": true
}
```
## Example: Foreign Table Import
```json
{
  "connection_id": 1,
  "schema": "public",
  "table": "parcels",
  "geometry_column": "geom",
  "materialize": false
}
```
## Troubleshooting
### Common Issues
**Connection failed**
- Verify host, port, database name
- Check network connectivity
- Verify credentials
- Check firewall rules
**Table not found**
- Verify schema name
- Check table exists
- Verify user permissions
**Geometry column not detected**
- Ensure PostGIS extension enabled
- Check geometry column type
- Verify spatial reference system
**Import timeout**
- Check table size
- Use materialized import for large tables
- Consider filtering data
## Related Documentation
- [Vector Import](vector.md)
- [Raster Import](raster.md)
- [ESRI Import](esri.md)
- [Configuration Guide](../configuration.md)

213 docs/import/raster.md Normal file
@@ -0,0 +1,213 @@
# Raster Data Import
Import raster data from files, URLs, or cloud storage.
## Overview
Raster data import supports multiple formats and import methods for grid-based spatial data.
## Supported Formats
### GeoTIFF
- **Extension**: `.tif`, `.tiff`, `.gtif`
- **Description**: Georeferenced TIFF format
- **Features**: Full support for multi-band rasters, overviews, compression
### Cloud Optimized GeoTIFF (COG)
- **Extension**: `.tif`, `.tiff`
- **Description**: Cloud-optimized GeoTIFF format
- **Features**: Optimized for cloud storage and streaming
- **Benefits**: Efficient access to large rasters
### Other Formats
- **JPEG2000**: `.jp2`, `.j2k`
- **PNG**: `.png` (with world file)
- **NetCDF**: `.nc`, `.nc4`
- **HDF**: `.hdf`, `.h5`
## Import Methods
### File Upload
Upload raster files directly:
1. Navigate to raster upload page
2. Select file or drag and drop
3. Add optional description
4. Configure import options
5. Click "Upload"
**File Size Limit**: Configurable (default on the order of 100 MB; also bounded by PHP's `upload_max_filesize` and `post_max_size`)
### URL Import
Import from web-accessible URLs:
1. Navigate to URL import page
2. Enter raster URL
3. Configure import options
4. Optionally schedule import
5. Click "Import"
### S3 Bucket Import
Import from AWS S3 buckets:
1. Navigate to S3 import page
2. Configure AWS credentials
3. Select bucket and file
4. Configure import mode
5. Click "Import"
**Import Modes**:
- **Serve COG**: Register as remote COG (no download)
- **Download PostGIS**: Download and import to PostGIS
### GeoServer Import
Import from GeoServer WCS:
1. Navigate to GeoServer import page
2. Select workspace and layer
3. Configure import options
4. Click "Import"
## Import Process
### Step 1: File Validation
Raster file is validated:
- Format detection
- GDAL availability check
- File integrity verification
- Metadata extraction
### Step 2: Metadata Extraction
Metadata extracted using GDAL:
- Spatial reference system (SRID)
- Bounding box
- Pixel size
- Band count
- Data type
- NoData values
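The same metadata can be inspected with GDAL before uploading, for example:
```bash
gdalinfo -json elevation.tif                           # full metadata as JSON
gdalinfo elevation.tif | grep -E "Size|NoData|Band"    # quick summary
```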
### Step 3: PostGIS Import
Raster imported into PostGIS using `raster2pgsql`:
```bash
raster2pgsql -s {srid} -t {tile_size} {file} {schema}.{table} | psql
```
**Options**:
- **Tile Size**: Default 256x256 pixels
- **Schema**: Default 'public'
- **Table Name**: Auto-generated or specified
- **SRID**: Detected from file or specified
### Step 4: Registration
Raster registered in system:
- Metadata stored in `aurora_raster_layers` table
- Layer name assigned
- Access permissions set
- Preview generation
## Configuration Options
### Tile Size
Configure raster tiling:
- **256x256**: Default, good for most cases
- **512x512**: Larger tiles, fewer database rows
- **128x128**: Smaller tiles, more database rows
### Import Mode
For S3/URL imports:
- **Serve COG**: Register remote COG, no local storage
- **Download PostGIS**: Download and import to PostGIS
### Compression
Configure raster compression:
- **None**: No compression
- **JPEG**: Lossy compression
- **LZW**: Lossless compression
- **Deflate**: Lossless compression
## Example: GeoTIFF Upload
```bash
# Via API
curl -X POST "https://example.com/raster_upload.php" \
    -F "raster_file=@elevation.tif" \
    -F "description=Digital elevation model" \
    -F "tile_size=256x256"
```
## Example: S3 Import
```bash
# Via API
curl -X POST "https://example.com/raster_bucket_import_api.php" \
    -d "url=s3://bucket/path/to/raster.tif" \
    -d "mode=download_postgis" \
    -d "aws_access_key_id=..." \
    -d "aws_secret_access_key=..."
```
## Cloud Optimized GeoTIFF (COG)
COG format provides:
- **Efficient Streaming**: Access specific regions without full download
- **Cloud Storage**: Optimized for S3, Azure, GCS
- **Performance**: Fast access to large rasters
- **Cost Effective**: Reduced bandwidth usage
### Creating COGs
Use GDAL to create COG:
```bash
gdal_translate input.tif output.tif \
    -of COG \
    -co COMPRESS=LZW
# The COG driver tiles the output and builds overviews by default,
# so no separate TILED creation option is needed
```
## Troubleshooting
### Common Issues
**GDAL not available**
- Install GDAL: `apt-get install gdal-bin` (Ubuntu)
- Verify: `gdalinfo --version`
- Check PATH configuration
**Large file timeout**
- Increase PHP execution time
- Use background import
- Consider chunked upload
**SRID not detected**
- Check raster metadata
- Specify SRID manually
- Verify projection information
**Memory issues**
- Increase PHP memory limit
- Use tile-based processing
- Consider resampling large rasters
## Related Documentation
- [Vector Import](vector.md)
- [PostGIS Import](postgis.md)
- [ESRI Import](esri.md)
- [Raster Analysis Tools](../analysis-tools/raster-histogram.md)

210 docs/import/vector.md Normal file
@@ -0,0 +1,210 @@
# Vector Data Import
Import vector spatial data from files or URLs.
## Overview
Vector data import supports multiple file formats and import methods for point, line, and polygon data.
## Supported Formats
### GeoJSON
- **Extension**: `.geojson`, `.json`
- **Description**: GeoJSON Feature Collections
- **Features**: Full support for all geometry types and properties
### Shapefile
- **Extension**: `.shp`, `.zip` (containing shapefile components)
- **Description**: ESRI Shapefile format
- **Requirements**: ZIP archive must contain `.shp`, `.shx`, `.dbf` files
- **Note**: `.prj` file recommended for proper coordinate system
### KML
- **Extension**: `.kml`
- **Description**: Google Earth KML format
- **Features**: Supports placemarks, paths, and polygons
### CSV
- **Extension**: `.csv`
- **Description**: Comma-separated values with coordinates
- **Requirements**: Must have coordinate columns (lat/lon or x/y)
- **Features**: Automatic column detection
### GeoPackage
- **Extension**: `.gpkg`
- **Description**: OGC GeoPackage format
- **Features**: Supports multiple layers and raster data
### DXF
- **Extension**: `.dxf`
- **Description**: AutoCAD DXF format
- **Features**: Supports CAD drawings and annotations
### PBF
- **Extension**: `.pbf`
- **Description**: OpenStreetMap Protocol Buffer Format
- **Requirements**: `osm2pgsql` must be installed
- **Features**: High-performance OSM data import
## Import Methods
### File Upload
Upload files directly from your computer:
1. Navigate to upload page
2. Select file or drag and drop
3. Add optional description
4. Select target SRS (default: EPSG:4326)
5. Click "Upload"
**File Size Limit**: 50MB (configurable)
### URL Import
Import from web-accessible URLs:
1. Navigate to URL import page
2. Enter data URL
3. Configure import options
4. Optionally schedule import
5. Click "Import"
**Supported URL Types**:
- Direct file URLs
- GeoJSON service endpoints
- ArcGIS REST services
- WFS endpoints
### Scheduled Import
Set up recurring imports from URLs:
1. Configure URL import
2. Set schedule (daily, weekly, monthly)
3. Configure update mode (replace, append, upsert)
4. Save schedule
## Import Process
### Step 1: File Detection
The system automatically detects file type based on:
- File extension
- File content (for ambiguous extensions)
- ZIP archive contents
### Step 2: Metadata Extraction
Metadata is extracted including:
- Feature count
- Geometry types
- Bounding box
- Coordinate system
- Attribute fields
### Step 3: Data Processing
Data is processed based on file type:
- **GeoJSON**: Direct parsing and import
- **Shapefile**: Extraction from ZIP, coordinate transformation
- **CSV**: Coordinate column detection, geometry creation
- **KML**: Placemark and geometry extraction
- **PBF**: OSM data processing via osm2pgsql
### Step 4: PostGIS Import
Processed data is imported into PostGIS:
- Table creation: `spatial_data_{file_id}`
- Geometry conversion to PostGIS format
- Spatial index creation
- Attribute storage in JSONB
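Once imported, the JSONB attributes can be queried directly; a sketch assuming a `properties` column and a dataset table `spatial_data_42` (both names are illustrative):
```bash
psql -d aurora_gis -c "SELECT properties->>'name' AS name
    FROM spatial_data_42
    WHERE properties->>'category' = 'park'
    LIMIT 10;"
```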
### Step 5: Registration
Dataset is registered in the system:
- Metadata stored in `spatial_files` table
- Access permissions set
- Version record created
## Configuration Options
### Target SRS
Select target spatial reference system:
- **EPSG:4326** (WGS84): Default, global coverage
- **EPSG:3857** (Web Mercator): Web mapping standard
- **Other**: Any valid EPSG code
### Update Mode
For scheduled imports:
- **Replace**: Replace all existing data
- **Append**: Add new data to existing
- **Upsert**: Update existing, insert new (requires key columns)
### Filters
Apply filters during import:
- **Spatial Filter**: Bounding box or geometry
- **Attribute Filter**: SQL WHERE clause
- **Feature Limit**: Maximum number of features
## Example: GeoJSON Import
```bash
# Via API
curl -X POST "https://example.com/upload.php" \
    -F "spatial_file=@data.geojson" \
    -F "description=Sample dataset" \
    -F "targetSRS=EPSG:4326"
```
## Example: URL Import
```bash
# Via API
curl -X POST "https://example.com/import_url.php" \
    -d "url=https://example.com/data.geojson" \
    -d "description=Imported from URL" \
    -d "targetSRS=EPSG:4326"
```
## Troubleshooting
### Common Issues
**File too large**
- Check file size limit configuration
- Consider splitting large files
- Use URL import for very large files
**Invalid geometry**
- Verify coordinate system
- Check for invalid geometries in source
- Use geometry validation tools
**Missing coordinate system**
- Ensure `.prj` file included for shapefiles
- Specify target SRS manually
- Check source data metadata
**Import timeout**
- Increase PHP execution time limit
- Use background import for large files
- Consider chunked upload for very large files
## Related Documentation
- [Raster Import](raster.md)
- [PostGIS Import](postgis.md)
- [ESRI Import](esri.md)
- [Installation Guide](../installation.md)

129 docs/index.md Normal file
@@ -0,0 +1,129 @@
# Aurora GIS Documentation
Welcome to the Aurora GIS documentation. Aurora GIS is a comprehensive PHP-based web application for managing, analyzing, and visualizing geospatial data using PostgreSQL and PostGIS.
## Overview
Aurora GIS provides a complete solution for:
- **Data Management**: Upload, import, and manage spatial datasets in multiple formats
- **Spatial Analysis**: Perform advanced geospatial analysis including hotspot detection, outlier analysis, KDE, clustering, and more
- **Data Visualization**: Interactive maps, dashboards, and charts
- **Background Processing**: Asynchronous job processing for long-running operations
- **API Access**: RESTful API for programmatic access to datasets and analysis tools
## Quick Start
1. [Installation Guide](installation.md) - Set up Aurora GIS on your server
2. [Configuration](configuration.md) - Configure database, authentication, and system settings
3. [Architecture Overview](architecture.md) - Understand the system architecture
## Documentation Sections
```{toctree}
:maxdepth: 2
:caption: Getting Started
installation
configuration
architecture
```
```{toctree}
:maxdepth: 2
:caption: Data Import
import/index
```
```{toctree}
:maxdepth: 2
:caption: API Reference
api/index
```
```{toctree}
:maxdepth: 2
:caption: Workers
workers/index
```
```{toctree}
:maxdepth: 2
:caption: Analysis Tools
analysis-tools/index
```
```{toctree}
:maxdepth: 2
:caption: User Interface
ui/index
```
```{toctree}
:maxdepth: 2
:caption: Content & Applications
dashboard
web-apps
accordion
```
```{toctree}
:maxdepth: 1
:caption: Additional Resources
changelog
```
## Key Features
### Data Import & Management
- **Vector Formats**: GeoJSON, Shapefiles, KML, CSV, GeoPackage, DXF, PBF
- **Raster Formats**: GeoTIFF, COG, JPEG2000, NetCDF, HDF
- **URL-based imports** with scheduling
- **PostGIS remote** database connections
- **ESRI/ArcGIS** service imports
- **Overture Maps** integration
- **S3 bucket** imports
### Spatial Analysis
- **Hot Spot Analysis**: Identify statistically significant clusters using Getis-Ord Gi* statistics
- **Outlier Detection**: Find statistical outliers using z-score or MAD methods
- **KDE (Kernel Density Estimation)**: Generate density surfaces from point data
- **Clustering**: Group features based on spatial proximity
- **Zonal Statistics**: Calculate statistics for raster data within polygon zones
- **Proximity Analysis**: Buffer, nearest neighbor, and distance calculations
- **Overlay Operations**: Intersect, union, erase, and join operations
### Background Processing
- Asynchronous job processing for long-running operations
- Worker-based architecture for scalable processing
- Job status tracking and monitoring
- Support for scheduled imports and analysis
### Visualization
- Interactive Leaflet.js maps
- **Dashboard Builder**: Create custom dashboards with multiple widgets
- **Web Apps**: Build multi-page applications with custom layouts
- **Accordion Stories**: Create narrative content with expandable sections
- Chart generation from spatial data
- Layer styling and legend management
- Popup configuration for feature details
## System Requirements
- PHP 7.4 or higher
- PostgreSQL 12+ with PostGIS extension
- Web server (Apache/Nginx)
- PHP extensions: PDO, PDO_PGSQL, JSON, ZIP; GDAL command-line tools (a system package, not a PHP extension) for raster operations
- Optional: DuckDB (for Overture Maps), QGIS Server (for QGIS projects)
## Support
For issues, questions, or contributions, please refer to the relevant documentation sections or contact the development team.

273 docs/installation.md Normal file
@@ -0,0 +1,273 @@
# Installation Guide
This guide will walk you through installing and setting up Aurora GIS on your server.
## System Requirements
### Server Requirements
- **PHP**: 7.4 or higher
- **PostgreSQL**: 12 or higher
- **PostGIS**: 3.0 or higher
- **Web Server**: Apache 2.4+ or Nginx 1.18+
- **Operating System**: Linux (Ubuntu/Debian recommended), macOS, or Windows
### PHP Extensions
Required PHP extensions:
- `pdo`
- `pdo_pgsql`
- `json`
- `zip`
- `gd` (for image processing)
- `curl` (for URL imports)
Optional but recommended:
- `mbstring` (for string handling)
- GDAL command-line tools (e.g. `gdal-bin`) for advanced raster operations; GDAL is invoked as an external program, not loaded as a PHP extension
### Optional Dependencies
- **DuckDB**: For Overture Maps parquet processing
- **QGIS Server**: For QGIS project rendering
- **GeoServer**: For advanced WMS/WFS services
- **pg_tileserv**: For vector tile generation
## Installation Steps
### 1. Install PostgreSQL and PostGIS
#### Ubuntu/Debian
```bash
# Install PostgreSQL and PostGIS
sudo apt-get update
sudo apt-get install postgresql postgresql-contrib postgis
# Enable PostGIS extension
sudo -u postgres psql -c "CREATE EXTENSION IF NOT EXISTS postgis;"
```
#### macOS (Homebrew)
```bash
brew install postgresql postgis
brew services start postgresql
```
#### Windows
Download and install from:
- PostgreSQL: https://www.postgresql.org/download/windows/
- PostGIS: https://postgis.net/windows_downloads/
### 2. Create Database
```bash
# Create database user
sudo -u postgres createuser -P aurora_user
# Create database
sudo -u postgres createdb -O aurora_user aurora_gis
# Enable PostGIS extension
sudo -u postgres psql -d aurora_gis -c "CREATE EXTENSION IF NOT EXISTS postgis;"
```
### 3. Install Application Files
```bash
# Clone or download the application
cd /var/www/html # or your web server directory
git clone <repository-url> aurora-gis
cd aurora-gis
# Set proper permissions
sudo chown -R www-data:www-data .
sudo chmod -R 755 .
sudo chmod -R 775 uploads/
```
### 4. Configure Web Server
#### Apache Configuration
Create or edit `/etc/apache2/sites-available/aurora-gis.conf`:
```apache
<VirtualHost *:80>
    ServerName aurora-gis.local
    DocumentRoot /var/www/html/aurora-gis

    <Directory /var/www/html/aurora-gis>
        Options Indexes FollowSymLinks
        AllowOverride All
        Require all granted
    </Directory>

    ErrorLog ${APACHE_LOG_DIR}/aurora-gis_error.log
    CustomLog ${APACHE_LOG_DIR}/aurora-gis_access.log combined
</VirtualHost>
```
Enable the site:
```bash
sudo a2ensite aurora-gis
sudo systemctl reload apache2
```
#### Nginx Configuration
Create `/etc/nginx/sites-available/aurora-gis`:
```nginx
server {
    listen 80;
    server_name aurora-gis.local;
    root /var/www/html/aurora-gis;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```
Enable the site:
```bash
sudo ln -s /etc/nginx/sites-available/aurora-gis /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
```
### 5. Initialize Application
1. Open your web browser and navigate to:
```
http://your-server/initialize.php
```
2. Fill in the initialization form:
- **Database Host**: `localhost` (or your PostgreSQL host)
- **Database Name**: `aurora_gis`
- **Database User**: `aurora_user`
- **Database Password**: Your database password
- **Database Port**: `5432` (default)
- **Admin Email**: Your admin email address
- **Admin Password**: Choose a secure password
3. Click "Initialize" to:
- Create the `config/const.php` file with database credentials
- Create all required database tables
- Create the initial admin user
- Set up required directories
### 6. Verify Installation
After initialization, you should be able to:
1. Log in with your admin credentials at the login page
2. Access the home page and see the dashboard
3. Upload a test dataset to verify functionality
## Post-Installation Setup
### Create Required Directories
The initialization script creates most directories, but you may need to create additional ones:
```bash
mkdir -p uploads/geoserver_documents
mkdir -p uploads/tabular
mkdir -p uploads/raster
mkdir -p uploads/qgis
mkdir -p logs
chmod -R 775 uploads/
chmod -R 775 logs/
```
### Configure Background Workers
Background workers process long-running jobs. Set them up as systemd services:
```bash
# Copy systemd service files
sudo cp systemd/*.service /etc/systemd/system/
# Enable and start workers
sudo systemctl enable hotspot_worker.service
sudo systemctl start hotspot_worker.service
# Repeat for other workers as needed
```
See the [Workers Documentation](workers/index.md) for details on each worker.
### Optional: Install DuckDB (for Overture Maps)
```bash
# Ubuntu/Debian
sudo snap install duckdb
# Or download binary from https://duckdb.org/docs/installation/
```
### Optional: Install QGIS Server
```bash
# Ubuntu/Debian
sudo apt-get install qgis-server qgis-server-plugin
# Configure QGIS Server
sudo systemctl enable qgis-server
sudo systemctl start qgis-server
```
## Troubleshooting
### Database Connection Issues
- Verify PostgreSQL is running: `sudo systemctl status postgresql`
- Check database credentials in `config/const.php`
- Ensure PostGIS extension is enabled: `psql -d aurora_gis -c "\dx"`
- Check PostgreSQL logs: `/var/log/postgresql/postgresql-*.log`
### Permission Issues
- Ensure web server user (www-data/apache) has read/write access to:
- `uploads/` directory
- `logs/` directory
- `config/const.php` (read-only after initialization)
### PHP Errors
- Check the PHP error log, e.g. `/var/log/php-errors.log` (the path is set by `error_log` in `php.ini`)
- Verify all required PHP extensions are installed: `php -m`
- Check PHP version: `php -v` (should be 7.4+)
### PostGIS Issues
- Verify PostGIS is installed: `psql -d aurora_gis -c "SELECT PostGIS_version();"`
- Check spatial reference systems: `psql -d aurora_gis -c "SELECT COUNT(*) FROM spatial_ref_sys;"`
## Next Steps
After successful installation:
1. Review [Configuration](configuration.md) for system settings
2. Read [Architecture Overview](architecture.md) to understand the system
3. Explore [API Documentation](api/index.md) for programmatic access
4. Check [Analysis Tools](analysis-tools/index.md) for available features
## Related Documentation
- [Configuration Guide](configuration.md)
- [Architecture Overview](architecture.md)
- [Workers Documentation](workers/index.md)

74 docs/ui/analysis-panel.md Normal file
@@ -0,0 +1,74 @@
# Analysis Panel
The analysis panel provides integrated access to spatial analysis tools within the map viewer.
## Overview
The analysis panel is accessible from the map viewer and provides quick access to analysis tools without leaving the map interface.
## Available Tools
### Proximity Analysis
- **Buffer**: Create buffer zones
- **Nearest**: Find nearest neighbors
- **Proximity**: Distance calculations
- **Center & Dispersion**: Central tendency analysis
- **Outliers**: Statistical outlier detection
### Overlay Operations
- **Intersect**: Find overlapping features
- **Overlay Layers**: Combine multiple layers
- **Join**: Spatial joins
- **Join Features (Aggregated)**: Aggregated joins
- **Summarize Within**: Zonal statistics
- **Summarize Nearby**: Proximity-based summaries
- **Erase**: Remove overlapping features
### Spatial Analysis
- **Clustering**: Group features
- **Heatmap**: Density visualization
- **Hot Spots**: Getis-Ord Gi* analysis
- **Outliers**: Statistical outliers
### Raster Tools
- **Identify Pixel Value**: Query pixel values
- **Zonal Statistics**: Calculate zone statistics
- **Raster Histogram**: Value distribution
- **Raster Summary**: Summary statistics
- **Raster Profile**: Extract profiles
- **Raster Conversion**: Format conversion
- **Raster Comparison**: Compare rasters
## Tool Execution
### Quick Analysis
- Select tool from panel
- Configure parameters
- Run analysis
- View results on map
### Background Processing
- Long-running analyses run as background jobs
- Job status displayed
- Results appear when complete
- Notification on completion
## Results Display
- Results added as new layers
- Automatic styling applied
- Legend generation
- Popup configuration
## Related Documentation
- [Analysis Tools](../analysis-tools/index.md)
- [Map Viewer](map-viewer.md)
- [Workers](../workers/index.md)

107 docs/ui/dashboard-builder.md Normal file
@@ -0,0 +1,107 @@
# Dashboard Builder
The dashboard builder allows users to create custom data dashboards with multiple widgets.
## Overview
The dashboard builder (`dashboard_builder.php`) provides a drag-and-drop interface for creating interactive dashboards.
## Widgets
### Map Widget
- Interactive map display
- Multiple dataset layers
- Styling and configuration
- Basemap selection
### Chart Widget
- Bar charts
- Line charts
- Pie charts
- Scatter plots
- Time series charts
### Table Widget
- Data table display
- Sorting and filtering
- Pagination
- Export options
### Counter Widget
- Count of features
- Sum of numeric values
- Average of numeric values
- Custom calculations
### Filter Widget
- Dataset filter controls
- Property filters
- Spatial filters
- Filter synchronization
### Vector Analysis Widget
- Tabular statistics for vector datasets
- Summary information
- Aggregated values
### Raster Analysis Widget
- Tabular statistics for raster datasets
- Summary information
- Pixel value statistics
### Hot Spot Summary Widget
- Hot spot analysis summary
- Statistics display
- Class distribution
### Outlier Summary Widget
- Outlier analysis summary
- Statistics display
- Outlier count
## Dashboard Features
### Layout
- Drag-and-drop widget placement
- Resizable widgets
- Grid-based layout
- Responsive design
### Configuration
- Widget-specific settings
- Data source selection
- Styling options
- Update intervals
### Sharing
- Public dashboard URLs
- Embed codes
- Export configurations
- Permissions management
## Use Cases
- Data monitoring dashboards
- Analysis result displays
- Public data portals
- Executive summaries
- Operational dashboards
## Related Documentation
- [Dataset Viewer](dataset-viewer.md)
- [Map Viewer](map-viewer.md)
- [Analysis Panel](analysis-panel.md)

77 docs/ui/dataset-tools.md Normal file
@@ -0,0 +1,77 @@
# Dataset Tools
Dataset tools provide batch processing and advanced operations for datasets.
## Overview
Dataset tools (`batch_tools.php`, `batch_tools_advanced.php`, `batch_tools_raster.php`) provide interfaces for batch operations and advanced analysis.
## Core Analysis Tools
### Batch Operations
- Process multiple datasets
- Apply same operation to multiple datasets
- Batch import/export
- Bulk updates
### Analysis Tools
- Hot spot analysis
- Outlier detection
- Buffer analysis
- Join operations
- Dissolve operations
- Clip operations
## Advanced Analysis Tools
### Advanced Operations
- Complex spatial queries
- Multi-step analysis workflows
- Custom SQL operations
- Advanced filtering
## Raster Analysis Tools
### Raster Operations
- Zonal statistics
- Raster conversion
- Raster comparison
- Raster algebra
- Raster resampling
## Live Analysis Suite
### Real-time Analysis
- Live hot spot analysis
- Live outlier detection
- Live KDE
- Real-time filtering
- Dynamic updates
## Tool Configuration
### Parameters
- Input dataset selection
- Output dataset configuration
- Analysis parameters
- Output format options
### Scheduling
- Schedule batch operations
- Background processing
- Job queue management
- Progress monitoring
## Related Documentation
- [Analysis Tools](../analysis-tools/index.md)
- [Workers](../workers/index.md)
- [API Documentation](../api/index.md)

85 docs/ui/dataset-viewer.md Normal file
@@ -0,0 +1,85 @@
# Dataset Viewer
The dataset viewer provides a comprehensive interface for viewing and managing spatial datasets.
## Overview
The dataset viewer (`dataset.php`) displays datasets with multiple tabs for different views and operations.
## Features
### Data Tab
- Dataset metadata and information
- Feature listing with pagination
- Property filtering
- Export options
- Table view of features
### Map Tab
- Interactive map display
- Layer styling and configuration
- Popup configuration
- Legend management
- Basemap selection
### Analysis Tab
- Quick analysis tools
- Statistics display
- Chart generation
- Summary information
## Layer Controls
### Visibility
- Toggle layer visibility on/off
- Control layer opacity
- Layer ordering (z-index)
### Styling
- Point styling (color, size, symbol)
- Line styling (color, width, style)
- Polygon styling (fill, stroke, opacity)
- Graduated colors based on attributes
- Categorical colors
### Filters
- Property-based filtering
- Spatial filtering (bounding box)
- Save and load filter configurations
- Multiple filter conditions
### Popups
- Configure popup content
- Select properties to display
- Custom HTML formatting
- Link to detail pages
## Legend Management
- Automatic legend generation
- Custom legend configuration
- Graduated color legends
- Categorical legends
- Legend export
## Export Options
- GeoJSON export
- Shapefile export
- CSV export
- KML export
- Filtered export
## Related Documentation
- [Map Viewer](map-viewer.md)
- [Analysis Panel](analysis-panel.md)
- [Dataset Tools](dataset-tools.md)

41 docs/ui/index.md Normal file
@@ -0,0 +1,41 @@
# User Interface Components
Aurora GIS provides a comprehensive web-based interface for managing, analyzing, and visualizing spatial data.
## Overview
The user interface is built with:
- **Bootstrap 5**: Responsive UI framework
- **Leaflet.js**: Interactive mapping
- **OpenLayers**: Advanced mapping capabilities
- **Chart.js / Plotly**: Data visualization
- **Modern JavaScript**: ES6+ features
## Main Components
```{toctree}
:maxdepth: 2
dataset-viewer
map-viewer
dashboard-builder
analysis-panel
dataset-tools
```
## Key Features
- **Responsive Design**: Works on desktop, tablet, and mobile
- **Dark Mode**: Optional dark theme support
- **Interactive Maps**: Pan, zoom, and query features
- **Real-time Updates**: Live data updates and filtering
- **Customizable Dashboards**: Build custom data dashboards
- **Analysis Tools**: Integrated analysis panel
## Related Documentation
- [Analysis Tools](../analysis-tools/index.md)
- [API Documentation](../api/index.md)
- [Architecture Overview](../architecture.md)

88 docs/ui/map-viewer.md Normal file
@@ -0,0 +1,88 @@
# Map Viewer
The map viewer provides an interactive mapping interface for visualizing spatial data.
## Overview
The map viewer (`view_map.php`) displays multiple datasets on an interactive map with full analysis capabilities.
## Features
### Map Display
- Multiple basemap options
- Layer management
- Zoom and pan controls
- Feature identification
- Coordinate display
- Scale bar
### Layer Management
- Add/remove layers
- Layer visibility toggle
- Layer opacity control
- Layer ordering
- Layer grouping
### Styling
- Point styling
- Line styling
- Polygon styling
- Graduated colors
- Categorical colors
- Custom styles
### Analysis Tools
Integrated analysis panel with:
- Hot spot analysis
- Outlier detection
- KDE (Kernel Density Estimation)
- Clustering
- Buffer analysis
- Nearest neighbor
- Overlay operations
- Raster tools
## Basemaps
Available basemap options:
- OpenStreetMap
- CartoDB Positron
- CartoDB Dark Matter
- CartoDB Voyager
- ESRI World Imagery
- Custom WMS layers
## Interaction
### Feature Identification
- Click features to view details
- Popup display with attributes
- Link to feature detail pages
### Drawing Tools
- Draw polygons for clipping
- Draw lines for profiles
- Draw points for analysis
- Measure distances and areas
### Spatial Queries
- Query by location
- Query by attributes
- Spatial filters
- Buffer queries
## Related Documentation
- [Dataset Viewer](dataset-viewer.md)
- [Analysis Panel](analysis-panel.md)
- [Dashboard Builder](dashboard-builder.md)

248
docs/web-apps.md Normal file

@ -0,0 +1,248 @@
# Web Apps
Create multi-page web applications with custom layouts and content.
## Overview
Web Apps allow you to build custom multi-page applications with unique URLs (slugs). Each app can have multiple pages with different content types including maps, datasets, tables, and charts.
## Creating Web Apps
### Access
- Navigate to Web Apps management (`web_apps.php`)
- Only admins can create and manage web apps
- Apps are accessible via unique slug URLs
### App Structure
Web apps consist of:
- **App Configuration**: Name, slug, description, active status
- **Pages**: Multiple pages with different content
- **Widgets**: Content widgets on each page
- **Navigation**: Page navigation system
## App Configuration
### Basic Settings
- **Name**: Display name of the app
- **Slug**: URL-friendly identifier (e.g., `my-app`)
- **Description**: App description
- **Active Status**: Enable/disable app
### Access
Apps are accessed via:
```
/app.php?slug={app_slug}
```
Or with specific page:
```
/app.php?slug={app_slug}&page={page_id}
```
## Pages
### Page Types
Each page can contain different content types:
- **Map**: Interactive map display
- **Dataset**: Dataset viewer
- **Table**: Data table
- **Chart**: Data visualization
### Page Configuration
- **Title**: Page title
- **ID**: Unique page identifier
- **Content Type**: Map, dataset, table, or chart
- **Widgets**: Content widgets on the page
## Widgets
### Map Widget
Interactive map with layers.
**Configuration**:
- Dataset selection
- Basemap selection
- Layer styling
- Initial extent
### Dataset Widget
Dataset viewer widget.
**Configuration**:
- Dataset selection
- View mode (data/map/chart)
- Display options
### Table Widget
Data table display.
**Configuration**:
- Dataset selection
- Columns to display
- Sorting and filtering
- Pagination
### Chart Widget
Data visualization.
**Configuration**:
- Dataset selection
- Chart type
- X/Y axis configuration
- Styling options
## Building Web Apps
### Step 1: Create App
1. Navigate to Web Apps
2. Click "New Web App"
3. Enter name and slug
4. Set description
5. Save app
### Step 2: Add Pages
1. Open app editor
2. Add new page
3. Configure page settings
4. Select content type
5. Save page
### Step 3: Configure Widgets
1. Select page
2. Add widgets
3. Configure widget settings
4. Link to datasets
5. Save configuration
### Step 4: Publish
1. Set app to active
2. Test app via slug URL
3. Share app URL
4. Monitor usage
## Use Cases
### Public Applications
- Public data portals
- Community applications
- Information systems
- Data exploration tools
### Internal Tools
- Internal dashboards
- Workflow applications
- Data entry systems
- Reporting tools
### Custom Solutions
- Client-specific applications
- Project-specific tools
- Specialized interfaces
- Branded applications
## App Management
### Editing
- Edit app configuration
- Modify pages
- Update widgets
- Change permissions
### Publishing
- Activate/deactivate apps
- Set public/private access
- Configure permissions
- Monitor usage
### Maintenance
- Update content
- Refresh data
- Modify layouts
- Add new pages
## Permissions
### Access Control
- **Public**: Accessible without authentication
- **Private**: Requires authentication
- **Group-based**: Access by user groups
- **User-specific**: Individual user access
### Editing Permissions
- Only admins can create/edit apps
- App creators can edit their apps
- Permissions can be delegated
## Example Web App
A typical web app might include:
1. **Home Page**: Overview with map and key statistics
2. **Data Page**: Dataset browser and viewer
3. **Analysis Page**: Analysis tools and results
4. **About Page**: Information and documentation
## API Access
Web apps can be accessed programmatically:
```bash
# Access app
GET /app.php?slug={app_slug}
# Access specific page
GET /app.php?slug={app_slug}&page={page_id}
```
## Best Practices
### Design
- Keep navigation simple
- Use consistent layouts
- Optimize for mobile
- Test across browsers
### Content
- Organize content logically
- Use clear page titles
- Provide navigation aids
- Include help text
### Performance
- Optimize widget loading
- Use efficient queries
- Cache when appropriate
- Monitor performance
## Related Documentation
- [Dashboard](dashboard.md)
- [Accordion Stories](accordion.md)
- [UI Components](../ui/index.md)

87
docs/workers/clip.md Normal file

@ -0,0 +1,87 @@
# Clip Worker
Processes clip operations to extract features within a boundary.
## Overview
The clip worker extracts features from a dataset that intersect with a clipping boundary geometry.
## Job Type
`clip`
## Input Parameters
```json
{
"dataset_id": 123,
"clip_geometry": {
"type": "Polygon",
"coordinates": [ ... ]
},
"output_dataset_id": 124
}
```
### Parameters
- `dataset_id` (required): Source dataset ID
- `clip_geometry` (required): GeoJSON geometry for clipping boundary
- `output_dataset_id` (required): Output dataset ID
## Output
Creates a new dataset with clipped features:
- Features that intersect the clipping boundary
- Geometry clipped to boundary
- Original attributes preserved
## Algorithm
The worker uses PostGIS `ST_Intersection` to:
1. Transform clipping geometry to dataset SRID
2. Find features that intersect the boundary
3. Clip geometries to boundary
4. Store results in output table
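A minimal SQL sketch of that sequence, assuming the `spatial_data_{id}` table convention used elsewhere in these docs, `geom`/`properties` columns, and a dataset SRID of 3857 (all illustrative):
```sql
-- Sketch only: table names, column names, and SRIDs are assumptions,
-- not the worker's exact SQL.
WITH boundary AS (
  SELECT ST_Transform(
           ST_SetSRID(ST_GeomFromGeoJSON(
             '{"type":"Polygon","coordinates":[[[-10,-10],[10,-10],[10,10],[-10,10],[-10,-10]]]}'
           ), 4326),
           3857) AS g                          -- step 1: match dataset SRID
)
INSERT INTO spatial_data_124 (properties, geom)
SELECT d.properties,                           -- original attributes preserved
       ST_Intersection(d.geom, b.g)            -- step 3: clip to boundary
FROM spatial_data_123 d
CROSS JOIN boundary b
WHERE ST_Intersects(d.geom, b.g);              -- step 2: intersecting features only
```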
## Example
```bash
# Enqueue a clip job via API
curl -X POST "https://example.com/api/datasets_clip_run.php" \
-H "Content-Type: application/json" \
-d '{
"dataset_id": 123,
"clip_geometry": {
"type": "Polygon",
"coordinates": [[[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]]
},
"output_dataset_id": 124
}'
```
## Background Jobs
This analysis runs as a background job. The worker:
1. Fetches queued `clip` jobs
2. Validates input parameters
3. Executes PostGIS clip queries
4. Creates output dataset
5. Marks job as completed
## Performance Considerations
- Processing time depends on dataset size and boundary complexity
- Complex clipping boundaries may slow processing
- Spatial indexes improve intersection performance
- Consider simplifying geometries before clipping
## Related Documentation
- [Clip Analysis Tool](../analysis-tools/clip.md)
- [Analysis API](../api/analysis.md)
- [Workers Overview](index.md)

79
docs/workers/create_view.md Normal file

@ -0,0 +1,79 @@
# Create View Worker
Processes view creation jobs to create database views from queries.
## Overview
The create view worker creates database views based on SQL queries, allowing dynamic datasets that update when source data changes.
## Job Type
`create_view`
## Input Parameters
```json
{
"source_dataset_id": 123,
"query": "SELECT * FROM spatial_data_123 WHERE properties->>'category' = 'A'",
"output_dataset_id": 124
}
```
### Parameters
- `source_dataset_id` (required): Source dataset ID
- `query` (required): SQL query to create view from
- `output_dataset_id` (required): Output dataset ID
## Output
Creates a new dataset backed by a database view:
- View created in `spatial_data` schema
- Dataset metadata in `spatial_files` table
- View updates automatically when source data changes
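Conceptually the worker emits DDL along these lines (a sketch; the exact view name within the `spatial_data` schema is an assumption):
```sql
-- Sketch: the validated query becomes a view; metadata registration
-- in spatial_files happens separately and is not shown here.
CREATE VIEW spatial_data.spatial_data_124 AS
SELECT *
FROM spatial_data_123
WHERE properties->>'category' = 'A';
```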
## Use Cases
- Filtered datasets
- Joined datasets
- Aggregated datasets
- Computed datasets
## Example
```bash
# Enqueue a create view job via API
curl -X POST "https://example.com/api/create_view_run.php" \
-H "Content-Type: application/json" \
-d '{
"source_dataset_id": 123,
"query": "SELECT * FROM spatial_data_123 WHERE properties->>'\''category'\'' = '\''A'\''",
"output_dataset_id": 124
}'
```
## Background Jobs
This analysis runs as a background job. The worker:
1. Fetches queued `create_view` jobs
2. Validates input parameters
3. Validates SQL query
4. Creates database view
5. Creates dataset metadata
6. Marks job as completed
## Performance Considerations
- Views don't store data, so creation is fast
- Query performance depends on underlying data
- Complex queries may slow view access
- Consider materialized views for expensive queries
## Related Documentation
- [Analysis API](../api/analysis.md)
- [Workers Overview](index.md)

109
docs/workers/dissolve.md Normal file

@ -0,0 +1,109 @@
# Dissolve Worker
Processes dissolve operations to merge features based on attribute values.
## Overview
The dissolve worker merges adjacent or overlapping features that share the same attribute value, optionally aggregating numeric fields.
## Job Type
`dissolve`
## Input Parameters
```json
{
"source_dataset_id": 123,
"output_dataset_id": 124,
"dissolve_mode": "field",
"dissolve_field": "category",
"aggregation_fields": {
"population": "sum",
"area": "sum"
}
}
```
### Parameters
- `source_dataset_id` (required): Source dataset ID
- `output_dataset_id` (required): Output dataset ID
- `dissolve_mode` (optional): "all", "field", or "custom" (default: "field")
- `dissolve_field` (required if mode="field"): Field to dissolve on
- `custom_field` (required if mode="custom"): Field for custom grouping
- `custom_groups` (required if mode="custom"): Array of group definitions
- `aggregation_fields` (optional): Object mapping field names to aggregation functions (sum, avg, min, max, count)
## Output
Creates a new dataset with dissolved features:
- Merged geometries for each group
- Aggregated attribute values
- Group identifiers
## Dissolve Modes
### All Features
Merge all features into a single feature. No grouping field required.
### By Field
Merge features that share the same value in the specified field.
### Custom Groups
Merge features based on custom group definitions. Allows complex grouping logic.
## Aggregation Functions
- `sum`: Sum of numeric values
- `avg`: Average of numeric values
- `min`: Minimum value
- `max`: Maximum value
- `count`: Count of features
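For `dissolve_mode: "field"`, the generated query is conceptually a grouped union; a sketch under the `spatial_data_{id}` naming convention, with `geom`/`properties` columns assumed:
```sql
-- Sketch: dissolve on properties->>'category', summing two fields.
INSERT INTO spatial_data_124 (properties, geom)
SELECT jsonb_build_object(
         'category',   properties->>'category',                 -- group identifier
         'population', sum((properties->>'population')::numeric),
         'area',       sum((properties->>'area')::numeric)
       ),
       ST_Union(geom)                                           -- merged geometry
FROM spatial_data_123
GROUP BY properties->>'category';
```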
## Example
```bash
# Enqueue a dissolve job via API
curl -X POST "https://example.com/api/run_dissolve.php" \
-H "Content-Type: application/json" \
-d '{
"source_dataset_id": 123,
"output_dataset_id": 124,
"dissolve_mode": "field",
"dissolve_field": "category",
"aggregation_fields": {
"population": "sum",
"area": "sum"
}
}'
```
## Background Jobs
This analysis runs as a background job. The worker:
1. Fetches queued `dissolve` jobs
2. Validates input parameters
3. Executes PostGIS dissolve queries
4. Applies aggregations
5. Creates output dataset
6. Marks job as completed
## Performance Considerations
- Processing time depends on dataset size and number of groups
- Complex geometries may slow processing
- Aggregation operations add processing time
- Consider simplifying geometries before dissolving
## Related Documentation
- [Dissolve Analysis Tool](../analysis-tools/dissolve.md)
- [Analysis API](../api/analysis.md)
- [Workers Overview](index.md)

78
docs/workers/erase_analysis.md Normal file

@ -0,0 +1,78 @@
# Erase Analysis Worker
Processes erase operations to remove features using another dataset.
## Overview
The erase analysis worker removes portions of features from an input dataset that overlap with features in an erase dataset.
## Job Type
`erase_analysis`
## Input Parameters
```json
{
"input_dataset_id": 123,
"erase_dataset_id": 124
}
```
### Parameters
- `input_dataset_id` (required): Input dataset ID
- `erase_dataset_id` (required): Erase dataset ID
## Output
Creates a new dataset with erased features:
- Features with erased portions removed
- Remaining geometry after erase operation
- Original attributes preserved
## Algorithm
The worker uses PostGIS `ST_Difference` to:
1. Find features that intersect the erase dataset
2. Calculate difference (input - erase)
3. Remove empty geometries
4. Store results in output table
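A sketch of that logic in SQL (table and column names assumed; features that touch no erase feature would be copied through unchanged by a separate branch, omitted here):
```sql
-- Sketch: subtract the union of overlapping erase geometries from
-- each input feature; drop features erased entirely.
SELECT i.id,
       i.properties,
       ST_Difference(i.geom, ST_Union(e.geom)) AS geom
FROM spatial_data_123 i
JOIN spatial_data_124 e ON ST_Intersects(i.geom, e.geom)
GROUP BY i.id, i.properties, i.geom
HAVING NOT ST_IsEmpty(ST_Difference(i.geom, ST_Union(e.geom)));
```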
## Example
```bash
# Enqueue an erase analysis job via API
curl -X POST "https://example.com/api/analysis_erase_run.php" \
-H "Content-Type: application/json" \
-d '{
"input_dataset_id": 123,
"erase_dataset_id": 124
}'
```
## Background Jobs
This analysis runs as a background job. The worker:
1. Fetches queued `erase_analysis` jobs
2. Validates input parameters
3. Executes PostGIS erase operations
4. Creates output dataset
5. Marks job as completed
## Performance Considerations
- Processing time depends on dataset sizes and overlap
- Complex geometries may slow processing
- Spatial indexes improve intersection performance
- Consider simplifying geometries before erasing
## Related Documentation
- [Erase Analysis Tool](../analysis-tools/erase.md)
- [Analysis API](../api/analysis.md)
- [Workers Overview](index.md)

115
docs/workers/hotspot_analysis.md Normal file

@ -0,0 +1,115 @@
# Hot Spot Analysis Worker
Processes hot spot analysis jobs using Getis-Ord Gi* statistics.
## Overview
The hot spot analysis worker identifies statistically significant clusters of high and low values in spatial data using the Getis-Ord Gi* statistic.
## Job Type
`hotspot_analysis`
## Input Parameters
```json
{
"dataset_id": 123,
"value_field": "population",
"neighbor_type": "distance",
"distance": 1000,
"output_mode": "static"
}
```
### Parameters
- `dataset_id` (required): Source dataset ID
- `value_field` (required): Numeric field to analyze
- `neighbor_type` (optional): "distance" or "knn" (default: "distance")
- `distance` (required if neighbor_type="distance"): Distance threshold in dataset units
- `k_neighbors` (required if neighbor_type="knn"): Number of nearest neighbors
- `output_mode` (optional): "static", "view", or "materialized_view" (default: "static")
## Output
Creates a new dataset with hot spot analysis results:
- **Gi* Z-Score**: Standardized z-score indicating hot/cold spots
- **P-Value**: Statistical significance
- **Hot Spot Class**: Confidence class (99% hot, 95% hot, 90% hot, not significant, 90% cold, 95% cold, 99% cold)
## Output Modes
### Static Table (default)
Results stored in a permanent table `spatial_data_{output_id}`. Best for:
- Final results that won't change
- Maximum query performance
- Historical snapshots
### View
Results stored as a database view. Best for:
- Results that should update when source data changes
- Real-time analysis
- Reduced storage requirements
### Materialized View
Results stored as a materialized view. Best for:
- Large datasets requiring periodic refresh
- Balance between performance and freshness
- Scheduled updates
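The modes differ only in the DDL wrapped around the same analysis query; a sketch (here `gi_query` is a hypothetical placeholder for the Gi* SELECT):
```sql
-- Static table (default): results frozen at analysis time.
CREATE TABLE spatial_data_124 AS SELECT * FROM gi_query;
-- View: recomputed on each read, always reflects current source data.
CREATE VIEW spatial_data_124 AS SELECT * FROM gi_query;
-- Materialized view: stored results, refreshed on a schedule.
CREATE MATERIALIZED VIEW spatial_data_124 AS SELECT * FROM gi_query;
REFRESH MATERIALIZED VIEW spatial_data_124;
```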
## Algorithm
The worker uses PostGIS functions to:
1. Calculate spatial weights matrix based on neighbor type
2. Compute Getis-Ord Gi* statistic for each feature
3. Calculate z-scores and p-values
4. Categorize results into hot spot classes
5. Store results in output table/view
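A condensed sketch of steps 1–3 for a distance neighborhood with binary weights (the `spatial_data_{id}` naming and `geom`/`properties` columns are assumptions):
```sql
-- Sketch: Getis-Ord Gi* z-scores, neighbors = features within 1000 units
-- (each feature counts as its own neighbor, as Gi* requires).
WITH vals AS (
  SELECT id, geom, (properties->>'population')::numeric AS x
  FROM spatial_data_123
), stats AS (
  SELECT avg(x) AS xbar, stddev_pop(x) AS s, count(*) AS n FROM vals
), neigh AS (
  SELECT a.id, sum(b.x) AS wx, count(*)::numeric AS w
  FROM vals a
  JOIN vals b ON ST_DWithin(a.geom, b.geom, 1000)
  GROUP BY a.id
)
SELECT n.id,
       (n.wx - s.xbar * n.w)
         / (s.s * sqrt((s.n * n.w - n.w * n.w) / (s.n - 1.0))) AS gi_zscore
FROM neigh n CROSS JOIN stats s;
```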
## Example
```bash
# Enqueue a hot spot analysis job via API
curl -X POST "https://example.com/api/analysis_hotspot_run.php" \
-H "Content-Type: application/json" \
-d '{
"dataset_id": 123,
"value_field": "population",
"neighbor_type": "distance",
"distance": 1000
}'
# Worker processes the job automatically
# Check status via API
curl "https://example.com/api/job_status.php?job_id=456"
```
## Background Jobs
This analysis runs as a background job. The worker:
1. Fetches queued `hotspot_analysis` jobs
2. Validates input parameters
3. Executes PostGIS analysis queries
4. Creates output dataset
5. Marks job as completed
## Performance Considerations
- Processing time depends on dataset size and neighbor configuration
- Distance-based analysis may be slower for large datasets
- KNN-based analysis is generally faster
- Consider using materialized views for very large datasets
## Related Documentation
- [Hot Spot Analysis Tool](../analysis-tools/hotspot.md)
- [Analysis API](../api/analysis.md)
- [Workers Overview](index.md)

93
docs/workers/hotspot_timeseries.md Normal file

@ -0,0 +1,93 @@
# Hot Spot Time Series Worker
Processes hot spot time series analysis jobs to analyze temporal patterns in hot spots.
## Overview
The hot spot time series worker performs hot spot analysis across multiple time periods to identify temporal patterns in spatial clustering.
## Job Type
`hotspot_timeseries`
## Input Parameters
```json
{
"dataset_id": 123,
"value_field": "population",
"time_field": "date",
"time_periods": ["2020", "2021", "2022"],
"neighbor_type": "distance",
"distance": 1000
}
```
### Parameters
- `dataset_id` (required): Source dataset ID
- `value_field` (required): Numeric field to analyze
- `time_field` (required): Field containing time period identifiers
- `time_periods` (required): Array of time period values to analyze
- `neighbor_type` (optional): "distance" or "knn" (default: "distance")
- `distance` (required if neighbor_type="distance"): Distance threshold
- `k_neighbors` (required if neighbor_type="knn"): Number of nearest neighbors
## Output
Creates a new dataset with time series hot spot results:
- Hot spot analysis for each time period
- Temporal patterns in clustering
- Time period identifiers
- Gi* z-scores and p-values for each period
## Algorithm
The worker:
1. Filters data by time period
2. Performs hot spot analysis for each period
3. Combines results with time period information
4. Stores results in output table
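The combined output is conceptually a stack of per-period Gi* result sets; illustrative only (`gi_2020` and friends stand in for the hypothetical per-period intermediates):
```sql
-- Sketch: one Gi* pass per requested period, tagged and stacked.
SELECT '2020' AS period, id, geom, gi_zscore FROM gi_2020
UNION ALL
SELECT '2021' AS period, id, geom, gi_zscore FROM gi_2021
UNION ALL
SELECT '2022' AS period, id, geom, gi_zscore FROM gi_2022;
```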
## Example
```bash
# Enqueue a hot spot time series job via API
curl -X POST "https://example.com/api/hotspot_timeseries_run.php" \
-H "Content-Type: application/json" \
-d '{
"dataset_id": 123,
"value_field": "population",
"time_field": "year",
"time_periods": ["2020", "2021", "2022"],
"neighbor_type": "distance",
"distance": 1000
}'
```
## Background Jobs
This analysis runs as a background job. The worker:
1. Fetches queued `hotspot_timeseries` jobs
2. Validates input parameters
3. Performs hot spot analysis for each time period
4. Combines results
5. Creates output dataset
6. Marks job as completed
## Performance Considerations
- Processing time depends on dataset size and number of time periods
- Each time period requires separate hot spot analysis
- Consider limiting number of time periods for large datasets
- Results can be large for many time periods
## Related Documentation
- [Hot Spot Analysis Tool](../analysis-tools/hotspot.md)
- [Analysis API](../api/analysis.md)
- [Workers Overview](index.md)

89
docs/workers/index.md Normal file

@ -0,0 +1,89 @@
# Workers Documentation
Background workers process long-running operations asynchronously, allowing the web interface to remain responsive.
## Overview
Workers are long-running PHP CLI scripts that:
- Poll the database for queued jobs
- Process jobs of a specific type
- Handle errors gracefully
- Log progress and results
- Run continuously until stopped
## Worker Architecture
All workers follow a similar pattern:
1. **Initialization**: Connect to database, verify connection
2. **Main Loop**: Continuously poll for jobs
3. **Job Processing**: Fetch, process, and complete jobs
4. **Error Handling**: Log errors and mark failed jobs
## Running Workers
Workers are designed to run as systemd services:
```bash
# Enable and start a worker
sudo systemctl enable hotspot_worker.service
sudo systemctl start hotspot_worker.service
# Check status
sudo systemctl status hotspot_worker.service
# View logs
sudo journalctl -u hotspot_worker.service -f
```
## Available Workers
```{toctree}
:maxdepth: 2
hotspot_analysis
outlier_analysis
nearest_analysis
dissolve
clip
raster_clip
create_view
erase_analysis
hotspot_timeseries
```
## Worker Configuration
Workers are configured via systemd service files in the `systemd/` directory. Each service file specifies:
- Working directory
- PHP executable path
- User/group to run as
- Restart behavior
- Resource limits
## Job Processing
Workers use the `background_jobs` table to manage jobs:
- **Enqueue**: Jobs are created with status 'queued'
- **Fetch**: Workers fetch jobs using `FOR UPDATE SKIP LOCKED`
- **Process**: Workers update status to 'running' and execute the job
- **Complete**: Workers update status to 'completed' with results
- **Error**: On failure, status set to 'failed' with error message
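The fetch step is what makes concurrent workers safe: each worker claims exactly one queued row, and `SKIP LOCKED` skips rows already claimed by another worker. A sketch, with assumed column names on `background_jobs`:
```sql
-- Sketch: atomically claim the oldest queued job of this worker's type.
UPDATE background_jobs
SET status = 'running', started_at = now()
WHERE id = (
  SELECT id
  FROM background_jobs
  WHERE status = 'queued' AND job_type = 'hotspot_analysis'
  ORDER BY created_at
  LIMIT 1
  FOR UPDATE SKIP LOCKED
)
RETURNING id, params;          -- params: assumed JSON payload column
```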
## Monitoring
Monitor workers via:
- Systemd logs: `journalctl -u {worker_name}.service`
- Application logs: `logs/worker_{name}.log`
- Database: Query `background_jobs` table for job status
## Related Documentation
- [Architecture Overview](../architecture.md)
- [API Documentation](../api/index.md)
- [Analysis Tools](../analysis-tools/index.md)

88
docs/workers/nearest_analysis.md Normal file

@ -0,0 +1,88 @@
# Nearest Analysis Worker
Processes nearest neighbor analysis jobs between two datasets.
## Overview
The nearest analysis worker finds the nearest features from a target dataset for each feature in a source dataset.
## Job Type
`nearest`
## Input Parameters
```json
{
"source_dataset_id": 123,
"target_dataset_id": 124,
"max_distance": 5000,
"limit": 1
}
```
### Parameters
- `source_dataset_id` (required): Source dataset ID
- `target_dataset_id` (required): Target dataset ID
- `max_distance` (optional): Maximum search distance in dataset units
- `limit` (optional): Maximum neighbors per feature (default: 1)
## Output
Creates a new dataset with nearest neighbor results:
- Original source feature geometry
- Nearest target feature information
- Distance to nearest neighbor
- Attributes from both source and target features
## Algorithm
The worker uses PostGIS functions to:
1. For each source feature, find nearest target features
2. Calculate distances using spatial indexes
3. Apply distance and limit constraints
4. Join attributes from both datasets
5. Store results in output table
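In SQL terms the core of the query is a `LATERAL` nearest-neighbor lookup; a sketch with the parameters from the example below (table and column names assumed):
```sql
-- Sketch: for each source feature, the single nearest target within
-- 5000 units; the <-> operator orders candidates via the spatial index.
SELECT s.id                         AS source_id,
       t.id                         AS target_id,
       ST_Distance(s.geom, t.geom)  AS distance
FROM spatial_data_123 s
CROSS JOIN LATERAL (
  SELECT id, geom
  FROM spatial_data_124
  WHERE ST_DWithin(geom, s.geom, 5000)   -- max_distance
  ORDER BY geom <-> s.geom               -- nearest-first, index-assisted
  LIMIT 1                                -- "limit" neighbors per feature
) t;
```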
## Example
```bash
# Enqueue a nearest analysis job via API
curl -X POST "https://example.com/api/nearest_run.php" \
-H "Content-Type: application/json" \
-d '{
"source_dataset_id": 123,
"target_dataset_id": 124,
"max_distance": 5000,
"limit": 1
}'
# Worker processes the job automatically
```
## Background Jobs
This analysis runs as a background job. The worker:
1. Fetches queued `nearest` jobs
2. Validates input parameters
3. Executes PostGIS nearest neighbor queries
4. Creates output dataset
5. Marks job as completed
## Performance Considerations
- Processing time depends on dataset sizes
- Spatial indexes are critical for performance
- Large max_distance values may slow processing
- Consider limiting results per feature
## Related Documentation
- [Nearest Analysis Tool](../analysis-tools/nearest.md)
- [Analysis API](../api/analysis.md)
- [Workers Overview](index.md)

94
docs/workers/outlier_analysis.md Normal file

@ -0,0 +1,94 @@
# Outlier Analysis Worker
Processes outlier detection jobs to identify statistical outliers in spatial data.
## Overview
The outlier analysis worker identifies features with values that are statistically unusual using z-score or MAD (Median Absolute Deviation) methods.
## Job Type
`outlier_analysis`
## Input Parameters
```json
{
"dataset_id": 123,
"value_field": "income",
"method": "zscore",
"threshold": 2.0
}
```
### Parameters
- `dataset_id` (required): Source dataset ID
- `value_field` (required): Numeric field to analyze
- `method` (optional): "zscore" or "mad" (default: "zscore")
- `threshold` (optional): Z-score threshold or MAD multiplier (default: 2.0)
## Output
Creates a new dataset with outlier analysis results:
- Original features marked as outliers
- Outlier score (z-score or MAD score)
- Outlier flag
- Original attributes preserved
## Methods
### Z-Score Method
Calculates standardized z-scores:
- Mean and standard deviation calculated
- Z-score = (value - mean) / standard_deviation
- Features with |z-score| > threshold are outliers
### MAD Method
Uses Median Absolute Deviation:
- Median calculated
- MAD = median(|value - median|)
- Modified z-score = 0.6745 * (value - median) / MAD
- Features with |modified z-score| > threshold are outliers
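A sketch of the z-score method in SQL (the MAD variant swaps in median-based statistics computed the same way; table and column names assumed):
```sql
-- Sketch: z-scores against the dataset mean/std; nulls excluded first.
WITH vals AS (
  SELECT id, (properties->>'income')::numeric AS x
  FROM spatial_data_123
  WHERE properties->>'income' IS NOT NULL
), stats AS (
  SELECT avg(x) AS mu, stddev_pop(x) AS sigma FROM vals
)
SELECT v.id,
       (v.x - s.mu) / NULLIF(s.sigma, 0)          AS zscore,
       abs(v.x - s.mu) / NULLIF(s.sigma, 0) > 2.0 AS is_outlier
FROM vals v CROSS JOIN stats s;
```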
## Example
```bash
# Enqueue an outlier analysis job via API
curl -X POST "https://example.com/api/analysis/outlier_run.php" \
-H "Content-Type: application/json" \
-d '{
"dataset_id": 123,
"value_field": "income",
"method": "zscore",
"threshold": 2.0
}'
```
## Background Jobs
This analysis runs as a background job. The worker:
1. Fetches queued `outlier_analysis` jobs
2. Validates input parameters
3. Calculates statistics (mean/std or median/MAD)
4. Identifies outliers
5. Creates output dataset
6. Marks job as completed
## Performance Considerations
- Processing time depends on dataset size
- Z-score method requires two passes (mean/std, then scoring)
- MAD method is more robust to outliers in calculation
- Consider filtering null values before analysis
## Related Documentation
- [Outlier Analysis Tool](../analysis-tools/outliers.md)
- [Analysis API](../api/analysis.md)
- [Workers Overview](index.md)

87
docs/workers/raster_clip.md Normal file

@ -0,0 +1,87 @@
# Raster Clip Worker
Processes raster clip operations to extract raster data within a boundary.
## Overview
The raster clip worker extracts raster data that intersects with a clipping boundary geometry.
## Job Type
`raster_clip`
## Input Parameters
```json
{
"raster_dataset_id": 125,
"clip_geometry": {
"type": "Polygon",
"coordinates": [ ... ]
},
"output_dataset_id": 126
}
```
### Parameters
- `raster_dataset_id` (required): Source raster dataset ID
- `clip_geometry` (required): GeoJSON geometry for clipping boundary
- `output_dataset_id` (required): Output raster dataset ID
## Output
Creates a new raster dataset with clipped data:
- Raster data within the clipping boundary
- Original raster properties preserved
- Proper spatial reference maintained
## Algorithm
The worker uses PostGIS raster functions to:
1. Transform clipping geometry to raster SRID
2. Clip raster to boundary using `ST_Clip`
3. Store clipped raster in output table
4. Update raster metadata
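A sketch of the core operation (the `rast` column and table names are assumptions; 3857 stands in for the raster's SRID):
```sql
-- Sketch: ST_Clip with crop = true trims each intersecting tile.
WITH boundary AS (
  SELECT ST_Transform(
           ST_SetSRID(ST_GeomFromGeoJSON(
             '{"type":"Polygon","coordinates":[[[-10,-10],[10,-10],[10,10],[-10,10],[-10,-10]]]}'
           ), 4326),
           3857) AS g
)
INSERT INTO spatial_data_126 (rast)
SELECT ST_Clip(r.rast, b.g, true)          -- crop = true
FROM spatial_data_125 r
CROSS JOIN boundary b
WHERE ST_Intersects(r.rast, b.g);
```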
## Example
```bash
# Enqueue a raster clip job via API
curl -X POST "https://example.com/api/raster_clip_run.php" \
-H "Content-Type: application/json" \
-d '{
"raster_dataset_id": 125,
"clip_geometry": {
"type": "Polygon",
"coordinates": [[[-180, -90], [180, -90], [180, 90], [-180, 90], [-180, -90]]]
},
"output_dataset_id": 126
}'
```
## Background Jobs
This analysis runs as a background job. The worker:
1. Fetches queued `raster_clip` jobs
2. Validates input parameters
3. Executes PostGIS raster clip operations
4. Creates output raster dataset
5. Marks job as completed
## Performance Considerations
- Processing time depends on raster size and boundary complexity
- Large rasters may require significant memory
- Consider resampling for very large rasters
- Boundary detail finer than the raster cell size adds processing cost without improving results
## Related Documentation
- [Raster Tools](../analysis-tools/raster.md)
- [Analysis API](../api/analysis.md)
- [Workers Overview](index.md)

8
pyproject.toml Normal file

@ -0,0 +1,8 @@
[build-system]
requires = ["flit_core >=3.2,<4"]
build-backend = "flit_core.buildapi"
[project]
name = "GeoLite"
authors = [{name = "AcuGIS", email = "hello@citedcorp.com"}]
dynamic = ["version", "description"]