This note explains what security telemetry really means and why different log sources matter. The goal is to understand where visibility comes from before worrying about detections and alert quality.
*(Screenshot: the Microsoft Sentinel data area, where telemetry sources are connected and reviewed.)*
## Why this matters

- You cannot detect what you do not collect.
- Endpoint, network, authentication, and infrastructure sources each show a different part of the story.
- Better investigations start with better telemetry coverage, not just more rules.
## Environment / Scope

| Item | Value |
| --- | --- |
| Topic | Log sources and telemetry |
| Best use for this note | Understanding visibility coverage |
| Main focus | Endpoint, network, auth, and infrastructure logs |
| Safe to practise? | Yes |
## Key concepts

- **Telemetry** - technical data collected from systems, services, and network activity.
- **Log source** - a system or service that produces data for monitoring.
- **Coverage** - how much useful visibility you actually have across the environment.
- **Context** - the extra detail that makes a log useful during an investigation.
## Mental model

Different sources answer different questions:

| Source type | What it helps answer |
| --- | --- |
| Endpoint logs | What happened on the host? |
| Authentication logs | Who tried to sign in, and from where? |
| Network telemetry | What connections and traffic patterns existed? |
| Infrastructure or platform logs | What did the service, appliance, or platform do? |

One alert often becomes much stronger when multiple sources support the same story.
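The "multiple sources, one story" idea can be sketched in code. This is a minimal illustration, not a real SIEM query: the event dicts, field names (`source`, `host`, `time`), and the `correlate` helper are all hypothetical, and real correlation would also normalize identifiers and handle clock skew.

```python
from datetime import datetime, timedelta

# Hypothetical normalized events from three different sources.
# Field names (source, host, time) are illustrative, not a real schema.
events = [
    {"source": "endpoint", "host": "ws01", "time": datetime(2024, 5, 1, 9, 0, 5),
     "detail": "powershell.exe spawned by winword.exe"},
    {"source": "auth", "host": "ws01", "time": datetime(2024, 5, 1, 9, 0, 2),
     "detail": "interactive logon for user j.doe"},
    {"source": "network", "host": "ws01", "time": datetime(2024, 5, 1, 9, 0, 9),
     "detail": "outbound connection to a rare external IP"},
]

def correlate(events, host, window=timedelta(minutes=5)):
    """Group events for one host that all fall inside a short time window."""
    hits = sorted((e for e in events if e["host"] == host), key=lambda e: e["time"])
    if not hits or hits[-1]["time"] - hits[0]["time"] > window:
        return []
    return hits

story = correlate(events, "ws01")
# Three independent sources supporting the same timeline is a much
# stronger signal than any single event on its own.
print(len({e["source"] for e in story}))  # -> 3
```

The point of the sketch: each source alone is weak evidence, but auth + endpoint + network agreeing on the same host within seconds is hard to explain away.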
## Everyday examples

| Source | Example value |
| --- | --- |
| Endpoint telemetry | Sysmon events from Windows |
| Linux logs | Auth logs, process logs, service logs |
| Network telemetry | Zeek logs or firewall logs |
| Infrastructure logs | UniFi syslog, cloud platform logs |
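As a small example of what one of these sources yields, here is a sketch of reading a single Zeek `conn.log` record, assuming Zeek is configured for JSON output. The record values are invented for illustration; only the field names (`ts`, `id.orig_h`, `id.resp_h`, `id.resp_p`, `proto`, `orig_bytes`, `resp_bytes`) follow Zeek's standard conn log schema.

```python
import json

# One Zeek conn.log record in JSON output mode (values are illustrative).
line = ('{"ts": 1714550405.12, "id.orig_h": "10.0.0.5", '
        '"id.resp_h": "203.0.113.7", "id.resp_p": 443, "proto": "tcp", '
        '"orig_bytes": 1024, "resp_bytes": 40960}')

rec = json.loads(line)

# Even one well-parsed record answers network questions an endpoint
# log cannot: who talked to whom, over what port, and how much data moved.
summary = (f"{rec['id.orig_h']} -> {rec['id.resp_h']}:{rec['id.resp_p']} "
           f"({rec['proto']}, {rec['orig_bytes'] + rec['resp_bytes']} bytes)")
print(summary)  # -> 10.0.0.5 -> 203.0.113.7:443 (tcp, 41984 bytes)
```

This is also where parsing quality shows up: if `orig_bytes` or the host fields are missing after ingestion, the source is "collected" but not usable.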
## Common misunderstandings

| Misunderstanding | Better explanation |
| --- | --- |
| "One good log source is enough" | Strong investigations often need multiple viewpoints. |
| "More logs always means more security" | Visibility improves only if the data is relevant and usable. |
| "If the tool collects data, the field quality must be good" | Collection and good parsing are different things. |
| "Detection problems always mean bad rules" | Sometimes the real issue is weak or missing telemetry. |
## Verification

| Check | Expected result |
| --- | --- |
| Sources are connected | Logs arrive from the expected systems. |
| Timestamps are usable | Event times are consistent enough to correlate. |
| Fields are meaningful | Data contains host, user, process, or network context. |
| Coverage matches use case | The environment has telemetry for the detections you care about. |
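The "fields are meaningful" and "timestamps are usable" checks can be automated with a quick coverage report. This is a minimal sketch over an invented sample: the field names (`host`, `user`, `ts`) and the `field_coverage` helper are hypothetical stand-ins for whatever schema your pipeline actually produces.

```python
from datetime import datetime

# Hypothetical sample of ingested events; field names are illustrative.
sample = [
    {"host": "ws01", "user": "j.doe", "ts": "2024-05-01T09:00:02Z"},
    {"host": "ws02", "user": None, "ts": "2024-05-01T09:01:15Z"},
    {"host": "", "user": "svc_backup", "ts": "not-a-timestamp"},
]

REQUIRED = ("host", "user", "ts")

def parses_as_time(value):
    """True if the value is a usable ISO 8601 timestamp."""
    try:
        datetime.fromisoformat(str(value).replace("Z", "+00:00"))
        return True
    except ValueError:
        return False

def field_coverage(events):
    """Return, per required field, the fraction of events with a usable value."""
    total = len(events)
    report = {}
    for field in REQUIRED:
        if field == "ts":
            ok = sum(1 for e in events if parses_as_time(e.get(field, "")))
        else:
            ok = sum(1 for e in events if e.get(field))
        report[field] = ok / total
    return report

print(field_coverage(sample))
# Low coverage on any field is a telemetry-quality problem,
# not a detection-rule problem.
```

Running a report like this per source makes "fields are meaningful" a measurable check instead of a gut feeling.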
## Pitfalls / Troubleshooting

| Problem | Likely cause | What to check |
| --- | --- | --- |
| Alerts feel blind or weak | Poor source quality | Field quality, missing context |
| Correlation fails | Timestamps or host identity are inconsistent | Time sync, naming, identifiers |
| Detections never fire | Source missing or wrong parser | Ingestion, mapping, rule fields |
| Investigations feel shallow | Too little telemetry diversity | Endpoint plus network plus auth coverage |
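The "correlation fails" row is worth seeing concretely. Below is a hedged sketch (all event data and the `same_incident` helper are invented) showing how clock skew and inconsistent host naming silently break a naive join:

```python
from datetime import datetime, timedelta

# Hypothetical: the same action seen by two sources whose clocks disagree
# and which record the host under different names.
endpoint_event = {"host": "WS01", "time": datetime(2024, 5, 1, 9, 0, 0)}
network_event = {"host": "ws01.corp.local",                 # same machine, different identifier
                 "time": datetime(2024, 5, 1, 9, 3, 30)}    # ~3.5 min clock skew

def same_incident(a, b, window=timedelta(minutes=2)):
    """Naive correlation: exact host match plus a tight time window."""
    return a["host"] == b["host"] and abs(a["time"] - b["time"]) <= window

print(same_incident(endpoint_event, network_event))  # -> False
# Both checks fail here: the host names differ in form, and the clock
# skew exceeds the window. Fix time sync and normalize identifiers
# before blaming the detection rule.
```

This is why "time sync, naming, identifiers" is the first thing to check when correlation fails.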
## Key takeaways

- Telemetry quality matters as much as detection quality.
- Different log sources answer different investigation questions.
- Coverage and context are what turn raw logs into useful security evidence.