View source code

HPCC-23398 Provide containerized HPCC log example

- Provides containerized HPCC logging information via README
- Provides hpcc log processing via Elastic Helm information
- Provides hpcc log processing via Azure AKS Insights info

signed-off-by: Rodrigo Pastrana <rodrigo.pastrana@lexisnexisrisk.com>
Rodrigo Pastrana 5 years ago
parent
commit
c7401854a8

File diff not shown because of its large size
+ 57 - 0
helm/examples/logging/README.md


+ 32 - 0
helm/examples/logging/azure/README.md

@@ -0,0 +1,32 @@
+# HPCC Log Processing via Azure's AKS Insights
+
+Azure's AKS Insights is an optional feature designed to help monitor the performance and health of Kubernetes-based clusters.
+Once enabled on an AKS cluster hosting an active HPCC Systems cluster, the HPCC component logs are automatically captured by Insights, since all STDOUT/STDERR data is collected and made available for monitoring and/or querying purposes. As is usually the case with cloud provider features, cost is a significant consideration and should be well understood before enabling this feature. Log content is written to the log store associated with your Log Analytics workspace.
+
+The AKS Insights interface on Azure provides Kubernetes-centric cluster/node/container-level health metric visualizations, and direct links to container logs via "log analytics" interfaces. The logs can be queried via the Kusto Query Language (KQL).
+
+    // Example query: transaction summary log entries from a known ESP component container
+    let ContainerIdList = KubePodInventory
+    | where ContainerName =~ 'xyz/myesp'
+    | where ClusterId =~ '/subscriptions/xyz/resourceGroups/xyz/providers/Microsoft.ContainerService/managedClusters/aks-clusterxyz'
+    | distinct ContainerID;
+    ContainerLog
+    | where LogEntry contains "TxSummary["
+    | where ContainerID in (ContainerIdList)
+    | project LogEntrySource, LogEntry, TimeGenerated, Computer, Image, Name, ContainerID
+    | order by TimeGenerated desc
+    | render table
+
+    Sample output
+    > 6/20/2020, 1:00:00.244 AM	stderr	1 TxSummary[activeReqs=6 auth=NA contLen=352 rcv=0ms handleHttp=3ms user=@10.240.0.4 req=POST wsstore.SET v1.0 total=3ms ] 	aks-default-12315622-vmss00000i			bc1555515a09e12a129c3ea5df0b76fb74c4227354dc2b643182c8f910b33ed4
+    > 6/20/2020, 12:59:58.910 AM	stderr	1 TxSummary[activeReqs=5 auth=NA contLen=99 rcv=0ms handleHttp=2ms user=@10.240.0.5 req=POST wsstore.FETCH v1.0 total=2ms ] 	aks-default-12315622-vmss00000i			bc1555515a09e12a129c3ea5df0b76fb74c4227354dc2b643182c8f910b33ed4
+
+More complex queries can be formulated to fetch specific information provided in any of the log columns including unformatted data in the log message. The Insights interface facilitates creation of alerts based on those queries, which can be used to trigger emails, SMS, Logic App execution, and many other actions.
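+
+Queries backing alerts follow the same pattern. As a sketch, the query below counts error-like entries per container in five-minute windows; the "ERROR" marker and the threshold of 10 are illustrative assumptions, not HPCC-defined values:
+
+    // Illustrative alert query: flag containers logging many error-like entries
+    ContainerLog
+    | where LogEntry contains "ERROR"
+    | summarize ErrorCount = count() by ContainerID, bin(TimeGenerated, 5m)
+    | where ErrorCount > 10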
+
+Log and/or metric capture behavior can be controlled via Kubernetes YAML:
+
+    https://github.com/microsoft/OMS-docker/blob/ci_feature_prod/Kubernetes/container-azm-ms-agentconfig.yaml
+    
+Overly chatty streams can be filtered out, the capture of Kubernetes events can be turned off, and so on.
+Always keep in mind that sensitive data could be logged; restricting access to Insights is therefore strongly advised.
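+
+As a sketch of how such filtering is expressed, the linked agent ConfigMap embeds TOML-style settings in YAML. The fragment below is illustrative (the excluded namespaces are assumptions, not recommendations), showing stdout/stderr collection toggles and namespace exclusions:
+
+    # Illustrative fragment of container-azm-ms-agentconfig.yaml (assumed values)
+    kind: ConfigMap
+    apiVersion: v1
+    metadata:
+      name: container-azm-ms-agentconfig
+      namespace: kube-system
+    data:
+      log-data-collection-settings: |-
+        [log_collection_settings]
+          [log_collection_settings.stdout]
+            enabled = true
+            exclude_namespaces = ["kube-system"]
+          [log_collection_settings.stderr]
+            enabled = true
+            exclude_namespaces = ["kube-system"]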
+

+ 32 - 0
helm/examples/logging/elastic/README.md

@@ -0,0 +1,32 @@
+# HPCC Log Processing via Elastic Stack
+
+Setting up a base Elastic Stack cluster to process HPCC Systems component logs is straightforward. Elastic provides Helm charts to deploy each of their components, so we'll add the Elastic helm-charts repository locally:
+
+	> helm repo add elastic https://helm.elastic.co
+
+We'll install the Filebeat component (the log agent) and ElasticSearch (the log store and indexer). By default, Filebeat forwards log entries to the ElasticSearch default endpoint:
+
+    > helm install filebeat elastic/filebeat
+    > helm install elasticsearch elastic/elasticsearch
+
+Finally, the Kibana component can be installed as a front end, providing log index management, log querying, and visualization:
+
+    > helm install kibana elastic/kibana
+
+Each of the Elastic components should be configured appropriately based on the cluster's needs. Detailed documentation can be found on the Elastic helm-charts GitHub page: https://github.com/elastic/helm-charts/
+
+When inspecting the Elastic pods and services, expect to see a Filebeat pod on each node of your cluster, a configurable number of ElasticSearch pods, and a service for each of Kibana and ElasticSearch.
+
+Of utmost importance are the persistent volumes created by ElasticSearch on which the log indexes are stored. Review the ElasticSearch helm-charts GitHub page for details on all available configuration options.
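+
+For instance, replica count and persistent volume size can be overridden at install time via a values file. The fragment below is a sketch for the elastic/elasticsearch chart; the replica count and storage size are illustrative assumptions, not recommendations:
+
+    # elasticsearch-values.yaml (illustrative overrides)
+    replicas: 3
+    volumeClaimTemplate:
+      accessModes: ["ReadWriteOnce"]
+      resources:
+        requests:
+          storage: 30Gi
+
+The overrides would then be applied at install time:
+
+    > helm install elasticsearch elastic/elasticsearch -f elasticsearch-values.yaml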
+
+Port forwarding might be required to expose the Kibana interface:
+
+    > kubectl port-forward service/kibana-kibana 5601
+
+Once all the components are up and running, logs are written to a filebeat-prefixed index in ElasticSearch, which can be managed from the Kibana interface. Index content retention policy rules can also be configured there.
+
+The default Filebeat configuration attaches several Kubernetes metadata fields to each log entry forwarded to ElasticSearch. These fields can be used to identify the HPCC component responsible for each entry, along with the source pod and/or node on which the event was reported.
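+
+Those metadata fields can be used directly in Kibana's query bar. For example, a hypothetical KQL filter (the container name `myesp` is an illustrative assumption) isolating transaction summaries from a single HPCC component:
+
+    kubernetes.container.name : "myesp" and message : *TxSummary*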
+
+<kibana discovery page screenshot>
+  
+Kibana discovery page showing several HPCC Systems component log entries. All log entries not created by HPCC components are excluded by the filter in the top left corner. The Kubernetes container name, node name, and pod name accompany the "message" field, which contains the actual log entry.