[00:19:45] <dstorrs> libbyh: check ping on machine, check firewalls, verify you can ssh in, verify mongod running on target machine
[00:20:40] <libbyh> i think my problem is a firewall issue. ping OK, mongo via SSH ok, some ports OK. no response on 27017 or 28017 even though they're in my iptables
[00:21:16] <jY> is mongo listening to only 127.0.0.1?
[00:21:25] <libbyh> and the ports show as listening in netstat. i could connect when i was in the same building as the server but not from home. (not sure how much overlap there is between the IP of the server and the IP i had when i was able to connect.)
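A quick way to rule out jY's bind-address suggestion from the mongo shell on the server itself (a minimal sketch; the actual bind_ip value is whatever your config sets):

```javascript
// Run in the mongo shell on the server. If mongod was started with
// bind_ip 127.0.0.1, it only accepts local connections no matter what
// iptables allows for 27017/28017.
var opts = db.serverCmdLineOpts();   // shows how this mongod was started
printjson(opts.parsed);              // look for a bind_ip entry here
```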
[01:39:20] <dstorrs> you could do it time-major, but this is fine.
[01:39:42] <dstorrs> then set a desc index on the 'time' column.
[01:40:30] <dstorrs> once a minute, do a 'find' for users in time >= 30_mins_ago. mail everyone you find. use a capped collection so you don't need to worry about storage
[01:41:14] <dnnsmanace> that makes sense, and that shouldnt choke up the app with lots of users
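A minimal shell sketch of the approach dstorrs describes, assuming a hypothetical `activity` collection with documents of the form `{user: ..., time: ISODate(...)}`:

```javascript
// Capped collection so storage is bounded, descending index on time.
db.createCollection("activity", { capped: true, size: 100 * 1024 * 1024 });
db.activity.ensureIndex({ time: -1 });

// Run roughly once a minute (from cron or an app timer):
var cutoff = new Date(new Date().getTime() - 30 * 60 * 1000);  // 30 minutes ago
db.activity.find({ time: { $gte: cutoff } }).forEach(function (doc) {
    print("would mail: " + doc.user);   // hand doc.user to your mailer here
});
```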
[02:26:31] <macrover> I have a question about Timestamp conversion here: http://pastie.org/4163385
[02:26:37] <macrover> looking for pointers, thanks
[03:03:29] <lizzin> why doesn't `show dbs` from within the mongo client show collections created by my liftweb app using net.liftweb.record?
[03:07:16] <lizzin> instead, several files have been created in dbpath/, such as collection.0, collection.1, and collection.ns
[03:16:36] <Ahlee> lizzin: appears your collection name is collection, those files are the preallocated files for the collection namespace
[03:17:09] <Ahlee> lizzin: You should be able to run db.collection.find().forEach(printjson) to see what's being inserted
[03:18:41] <Ahlee> Does this mean I didn't get this field indexed, or it didn't get inserted? from my log file: Wed Jun 27 23:04:40 [rsSync] ibp.logging Btree::insert: key too large to index, skipping ibp.logging.$message_1_background_ 1362 { : "SurVo - survey():
[03:22:07] <lizzin> Ahlee: you're right, i changed the name of the collection to 'collection'. the real name is 'phonebook'
[03:25:12] <lizzin> Ahlee: i was hoping to add documents to the collection from within my app and then connect to the db via the mongo client to verify things are going as expected
[03:26:04] <lizzin> Ahlee: shouldn't `show dbs` list all db's?
[03:26:07] <Ahlee> lizzin: ok, so there are dbs, and collections in the db. you've issued use <db_name>, then db.<collection>.count()?
[03:40:16] <Ahlee> yeah, that's what happened on my system when I just did it, I wound up with a database foobar, a collection in foobar database named foobar, with a single document consisting of an ObjectID, and name: "ummyeah"
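Putting Ahlee's steps together as a generic shell sketch (the database name here is a placeholder; 'phonebook' is lizzin's collection):

```javascript
show dbs                                   // only lists databases that actually contain data
use mydb                                   // whichever db your lift app writes to
show collections                           // should list 'phonebook'
db.phonebook.count()                       // how many documents made it in
db.phonebook.find().forEach(printjson)     // dump them to verify the inserts
```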
[03:44:59] <dnnsmanace> what format is this date: 2012-06-28T03:39:45.768Z
[03:46:44] <AAA_awright> What's the exact rule on . in property names? I have several documents with Object-hashtables of {"url": {information...}} and that doesn't seem to be a problem, except recently
[03:48:06] <dnnsmanace> whats the best way to compare ISO-8601 date and a date like this Thu, 28 Jun 2012 03:42:06 GMT
[03:49:22] <dstorrs> stupid question -- how do I find out what DB I'm currently in?
[03:50:06] <dstorrs> dnnsmanace: convert them to a single format. compare. enjoy a milkbone in a commie-free world phase
[03:50:10] <AAA_awright> dnnsmanace: Use a date-time library, most of them should be able to convert arbitrary strings to dates... Javascript has the Date object, look at the documentation for that for the specifics
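A minimal sketch of that suggestion, using the two example strings from above (assumes a JavaScript engine whose Date parser handles both formats):

```javascript
// Normalise both strings to Date objects, then compare millisecond timestamps.
var a = new Date("2012-06-28T03:39:45.768Z");        // ISO-8601
var b = new Date("Thu, 28 Jun 2012 03:42:06 GMT");   // RFC 1123 style
print(a.getTime() < b.getTime());                    // true: a is earlier
```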
[04:07:55] <AAA_awright> What's the exact rule on . in property names? I have several documents with Object-hashtables of {"url": {information...}} and that doesn't seem to be a problem, except recently
[04:08:01] <Guest1231231> or like usually: brain disabled?
[04:08:23] <wereHamster> AAA_awright: except recently? Can you elaborate on that?
[04:08:37] <AAA_awright> Now I'm getting a domain error, lemme see exactly
[04:08:46] <AAA_awright> "Error: Server Error: not okForStorage"
[04:08:48] <Ahlee> so mgd picking it up as a sub field
[04:09:09] <Ahlee> like "foo.bar" for {foo : { bar: 1, baz: 1}} ?
[04:13:38] <AAA_awright> Guest1231231: If you're going to be rude don't bring it into private messages. As an author of a program, I need to know the behavior of MongoDB so I can code accordingly for my program. I'm asking for that behavior, since apparently it's not documented. Got it?
[04:15:13] <AAA_awright> Ahlee: As far as I can tell, no. It's the actual key name. Unless MongoDB collapses sub-objects into parents like {a:{b:1}} -> {"a.b":1}, but I wouldn't expect that
[04:15:58] <Ahlee> AAA_awright: Sadly, I can't speak definitively on that. I believe that "a.b" is a short cut for {a:{b:1}}
[04:16:10] <Ahlee> but, I'm too green to be confident.
[04:18:18] <AAA_awright> Ahlee: There's nothing I can do to make it spit out {"a.b.c":1}, it looks like it's only used in queries
[04:22:09] <deoxxa> AAA_awright: {"a.b.c": 1} means exactly what it looks like - it would translate to pseudocode of `if (a.b.c === 1) { ... }'
[04:22:21] <deoxxa> (with some guards around undefined values, etc)
[04:23:17] <deoxxa> unless that's not what you're asking
[04:23:28] <deoxxa> i'm not quite sure - you're not making a whole lot of sense unfortunately :(
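What deoxxa is describing, as a short shell sketch (hypothetical `things` collection): dots in a query reach into nested documents, which is different from a dot inside a stored key name.

```javascript
db.things.insert({ a: { b: { c: 1 } } });
db.things.find({ "a.b.c": 1 });   // matches the nested document above
// A literal field *named* "a.b.c" is a different thing entirely, and is
// what triggers errors like "not okForStorage".
```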
[04:24:21] <AAA_awright> I've never had a problem inserting documents with properties containing "." until about 3 days ago
[04:24:37] <AAA_awright> I have several documents containing "." in property names
[04:24:45] <AAA_awright> I'm trying to figure out *how*
[04:25:05] <AAA_awright> Or rather, why I'm getting errors now and not earlier
[04:25:05] <deoxxa> ok, that's a better explanation
[04:25:29] <AAA_awright> But really I'm just looking for documentation on the subject
[04:25:38] <AAA_awright> But apparently no one's heard of any such documentation
[04:25:49] <AAA_awright> So now I'm thinking this is a bug?
[04:25:58] <Guest1231231> the bug is sitting in front of the keyboard
[04:26:23] <Ahlee> Keep on being helpful there big shooter.
[04:26:32] <deoxxa> ok, so after a cursory google search of "mongodb key dot"
[04:26:33] <AAA_awright> Guest1231231: Do you have an answer to my question to offer?
[04:26:38] <Max-P> Is it possible to update an associative array with a key that doesn't exist in an existing document? I need to {$set: {'folder.my_user_id': 'INBOX'}} but it seems to fail when the field is not already set =/ Thanks
[04:26:42] <deoxxa> i see https://jira.mongodb.org/browse/JAVA-151
[04:27:02] <Guest1231231> i tend to ignore questions from people who can't read other people's answers when those answers are perfectly correct
[04:30:08] <AAA_awright> Guest1231231: You asked "do you have a point?" before any other person even acknowledged my question. Again, rephrased, do you have something useful to contribute?
[04:30:30] <deoxxa> looks like it's never been valid, but there's been inconsistent driver support for stopping you from doing it
[04:31:07] <deoxxa> it doesn't *technically* get rejected at the database, but only to allow maximum flexibility for drivers implementing weird crap, by the looks of things
[04:31:32] <deoxxa> sane drivers will reject it, or at least strongly advise you not to do it
[04:31:37] <AAA_awright> I *assumed* it was done at the MongoDB level
[04:31:51] <AAA_awright> or rather, the server level
[04:32:06] <deoxxa> maybe it does now, but as far as i know it never was before
[04:32:13] <deoxxa> as long as you were talking to it at a low enough level
[04:39:55] <deoxxa> well, it's a bad enough idea that drivers actively try to stop you from doing it
[04:40:05] <deoxxa> so maybe time to re-think that design
[04:42:01] <AAA_awright> Except the next-best design I have is... escape the dots in the URIs
[04:42:54] <AAA_awright> There's no reason not to attach this information to a document, it makes sense to have an Object of { URI : {...} }
[04:44:13] <AAA_awright> I can think of other use cases, storing link-level data about all the URLs you reference on a page. You want to know where they are, what version of the page they're linking to, etc.
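One hedged sketch of the workaround AAA_awright mentions (escaping the dots): URL-style escape the characters MongoDB reserves in key names before using a URI as a key, and reverse it when reading back. The helper names are made up for illustration.

```javascript
// Escape '%' first so decoding is unambiguous, then '.' and '$'.
function encodeKey(uri) {
    return uri.replace(/%/g, "%25").replace(/\./g, "%2E").replace(/\$/g, "%24");
}
function decodeKey(key) {
    return decodeURIComponent(key);
}

var doc = { urls: {} };
doc.urls[encodeKey("http://example.com/page.html")] = { status: 200 };
db.pages.insert(doc);   // no dots left in the stored key names
```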
[05:30:26] <ferrouswheel> have people come up with a way to kill a foreground indexing?
[05:31:50] <ferrouswheel> and it brings everything to a crashing halt. seems like a retarded design that background=False is default.
[05:35:38] <ferrouswheel> issue here: https://jira.mongodb.org/browse/SERVER-3067 seems like a big issue, i'm not the only one it's brought down a site for.
[05:36:02] <ferrouswheel> and you can't even cleanly shutdown the server either
[05:45:18] <ferrouswheel> kill -9 mongod it is then! ;-p
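For reference, the background build ferrouswheel wishes were the default, plus the usual way to inspect and try to kill a long-running operation. SERVER-3067 is precisely about foreground index builds ignoring the kill, so the last line is not guaranteed to help on this version.

```javascript
db.mycoll.ensureIndex({ field: 1 }, { background: true });  // doesn't block the db

db.currentOp();     // find the opid of the runaway operation
db.killOp(12345);   // 12345 is a placeholder opid; foreground index builds
                    // may ignore this, per SERVER-3067
```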
[08:28:47] <LambdaDusk> Hi, I have seen that mongoose uses "Buffer" as a type for fields, I guess it's a BSON byte buffer... what is the use of that data type?
[08:30:49] <kali> LambdaDusk: store binary data ? :)
[08:31:23] <LambdaDusk> kali: But what makes Buffer better for this than String?
[08:32:55] <kali> assumptions are made about strings by mongo and the driver. they have to be utf-8 encoded, for instance
[08:33:25] <kali> so if you try to store an arbitrary byte array in mongo as a string, you'll get an error
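kali's point in shell terms (a mongoose Buffer field ends up as BSON BinData); the collection name and base64 payload here are just placeholders:

```javascript
// Strings must be valid UTF-8; arbitrary bytes go in as BinData.
// "SGVsbG8=" is base64 for "Hello", standing in for real binary data.
db.files.insert({ name: "blob", data: BinData(0, "SGVsbG8=") });
db.files.findOne().data;   // prints the BinData value back
```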
[09:10:08] <Gargoyle> got one of them back online, and the one I want to keep was promoted back to PRIMARY (I did originally shut down the secondaries).
[09:12:09] <Gargoyle> Nope. It drops to secondary as soon as it's the only one left. :(
[09:14:48] <Gargoyle> ahh. I have to remove the other members from the config.
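Presumably something along these lines is what Gargoyle means: strip the dead members out of the replica set config so the surviving node can hold an election on its own. Which member index to keep is an assumption, so check rs.conf() first.

```javascript
var cfg = rs.conf();
cfg.members = [ cfg.members[0] ];     // keep only the node you want as PRIMARY
rs.reconfig(cfg, { force: true });    // force is required when no majority is reachable
```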
[10:52:44] <neil__g> @NodeX I have a P-S-S replica set and I do a query on one of the servers. $res->count() returns the correct result count, but if I foreach() over $res it never enters the loop. Seems to be related to the secondaries being out of sync by an hour or so.
[11:11:22] <jamiel> does $res->count(true) return the correct result?
[11:48:36] <pilgo> kali: I just created a date created attribute for my collections and am wondering how I would migrate the existing documents to have a date. Any tips?
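Two hedged ways to backfill pilgo's new field from the shell; `mycoll` and `created` are placeholder names.

```javascript
// Option A: stamp every document that lacks the field with "now" (multi-update).
db.mycoll.update({ created: { $exists: false } },
                 { $set: { created: new Date() } },
                 false, true);                       // upsert=false, multi=true

// Option B: derive a closer-to-true creation date from each _id's embedded timestamp.
db.mycoll.find({ created: { $exists: false } }).forEach(function (doc) {
    db.mycoll.update({ _id: doc._id },
                     { $set: { created: doc._id.getTimestamp() } });
});
```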
[11:48:40] <ron> NodeX: at least kali doesn't use PHP.
[11:52:38] <pilgo> Am I guaranteed to get the same order of documents if I sort on a date field and they all have the same date?
[11:52:56] <ron> you care about the order of the documents? O_O
[11:53:46] <kali> ron: i'm not sure it is a sufficient condition
[11:53:50] <pilgo> ron, care only in the sense that I'd rather not them be sorted willy-nilly
[11:54:47] <NodeX> all languages have their bad points - if you stay clear of them then you're fine
[11:55:21] <kali> yeah, let's kill the troll before it devours us
[11:56:39] <balboah> is there a way to check consistency of a database that is synced in a replicaset? For example to see that some data that should be there isn't?
[11:57:10] <NodeX> only by making sure it's written at write time iirc
[11:58:14] <ron> pilgo: sorting should be done on specific fields. how they are stored in the database shouldn't matter (that much).
[11:58:20] <balboah> maybe it's possible to make it resync specific db's?
[11:58:35] <kali> balboah: nope, replication is server wide
[11:58:52] <balboah> alright, I'll just clear it to 0 and get everything
[11:58:56] <pilgo> ron: Right, I mean what the db returns on find.
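The practical upshot of ron's answer to pilgo: MongoDB gives no ordering guarantee between documents with equal sort keys, so add a tie-breaker if the order matters. Field names below are assumptions.

```javascript
// Sorting on date alone can return ties in any order; adding _id makes it stable.
db.mycoll.find().sort({ date: 1, _id: 1 });
```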
[14:44:09] <cemalb> Hi. I have a replica set on EC2 w/ a primary, secondary and arbiter. We restarted all 3 individually and now get the error "not master or secondary, can't read" when trying to query the primary or secondary. rs.status() shows "loading local.system.replset config (LOADINGCONFIG)". What should be my next step in getting these back online?
[14:45:12] <cemalb> Also rs.initiate() gives "local.oplog.rs is not empty on the initiating member. cannot initiate."
[14:55:46] <cemalb> There's an excerpt from the log
[14:56:46] <sharp15> how detailed is mongodb's permissions/access tracking? for instance. can i give a group of people access to a database and mongo will note who created (and when) which documents?
[14:58:09] <kali> cemalb: everything clear on the hostname front ? because "replSet error self not present in the repl set configuration" sounds like the replica can not find itself in the set
[14:59:21] <cemalb> Yeah, I was wondering about that. The hostname is mw-rs1-data1 which is in there though
[15:02:48] <kali> cemalb: can you check also that "host" in db.serverStatus() is consistent with what you expect ?
[15:06:10] <cemalb> kali: That looks correct.. "host" : "mw-rs1-data1"
[15:08:08] <kchodorow_> cemalb: can you start a mongo shell on mw-rs1-data1 by running: "mongo mw-rs1-data1:27017/test"
[15:09:36] <augustl> need a field in my db, "api token", some kind of UUID would do. Should I just generate my own, or does mongo have something built-in that ensures uniqueness etc?
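On augustl's question: mongo only auto-generates the _id ObjectId, but a unique index makes the server enforce uniqueness of whatever token you generate yourself. A sketch with assumed names:

```javascript
db.users.ensureIndex({ apiToken: 1 }, { unique: true, sparse: true });
db.users.insert({ name: "alice", apiToken: new ObjectId().str });  // or a real UUID
// inserting a second document with the same apiToken now fails with a
// duplicate key error
```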
[15:18:44] <cemalb> kchodorow_, kali: Welp, it was in fact a problem with the host name. It was mapped to the wrong IP. Everything's working now. Thanks for the help!
[15:54:46] <JoeyJoeJo> I've got documents that include lat/lon. How can I find all documents in a collection that are some distance from a given lat/lon?
[15:56:49] <Derick> a maximum distance you mean? or more than a certain distance?
[15:57:31] <JoeyJoeJo> I want to find anything within x kilometer radius of a coordinate
[16:00:00] <Ahlee> Does this mean I didn't get this field indexed, or it didn't get inserted? from my log file: Wed Jun 27 23:04:40 [rsSync] ibp.logging Btree::insert: key too large to index, skipping ibp.logging.$message_1_background_ 1362 { : "SurVo - survey(): <snip>
[16:00:38] <drudge\work> JoeyJoeJo: check out http://www.mongodb.org/display/DOCS/Geospatial+Indexing
[16:01:43] <JoeyJoeJo> drudge\work: I was just reading that actually. I have my lat and lon stored as db.collection.lat and db.collection.lon. Do I need to store them in one field?
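Per the geospatial docs drudge\work linked, the coordinates need to live together in a single field with a 2d index; a radius-in-kilometres query then looks roughly like this (collection and field names are assumptions, Earth radius taken as ~6378 km):

```javascript
db.places.insert({ name: "somewhere", loc: { lon: 13.4, lat: 52.5 } });  // lon first
db.places.ensureIndex({ loc: "2d" });

var km = 10;
db.places.find({
    loc: { $within: { $centerSphere: [ [ 13.4, 52.5 ], km / 6378 ] } }
});
```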
[16:57:15] <souza> I have the following bson in C language (http://pastebin.com/ZzXtdPSM) , it should return one result from mongodb, but it doesn't return anything, and i've no idea how to fix it, thanks!
[16:58:23] <souza> the "vm" object is a sub object, from another
[17:09:13] <multiHYP> I did it first with find() and that does not have a length method.
[17:12:02] <multiHYP> migration is a pain, I don't know if it's worth combining 3 collections into 1 that would fit with my current new model. analytics is always relevant though.
[17:12:12] <stefania> elemMatch? as documented :-P
[17:43:20] <halcyon918> hey folks… I've got a question about rep sets… if I set the WriteConcern to REPLICAS_SAFE, and one replica gets the update, but the others do not… does Mongo rollback the save on the first replica so it's as if it never was saved to the first replica?
[17:47:14] <stefania> mongodb and rollback is a contradiction
[17:51:06] <kali> stefania: when a replica is primary and gets partitioned, it will roll back the last few writes it received before becoming aware it was partitioned
[17:51:49] <kali> halcyon918: as far as i know, this is the only case that can lead to a rollback: resyncing a split replica set
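For context on halcyon918's REPLICAS_SAFE question, this is roughly what that write concern does, sketched at the shell level: the write lands on the primary, then getLastError waits for replication. A timeout does not undo the write; rollback only happens in the failover case kali describes.

```javascript
db.stuff.insert({ x: 1 });
db.runCommand({ getlasterror: 1, w: 2, wtimeout: 5000 });  // wait until 2 members have it
```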
[18:01:11] <dstorrs> hey all. I just connected to my development data and all the data is gone. Yet, db.coll.totalSize() has not changed.
[18:01:30] <dstorrs> this isn't a critical fail -- it's not prod data -- but I'm confused.
[18:02:03] <dstorrs> we're trying to see if the vanishing data was something we did or a Mongo issue. what could cause it aside from developer error?
[18:03:25] <dstorrs> also, am I correct that when a remove() is done, the disk-state of the data is marked 'deleted' (much like 'rm filename'), but it does not actually change the totalSize? for that you would need to run repairDatabase to compact, yes?
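That understanding is basically right; illustrated as a hedged shell sketch (placeholder collection name):

```javascript
db.mycoll.totalSize();                  // before
db.mycoll.remove({});                   // space is freed for reuse, files stay allocated
db.mycoll.totalSize();                  // typically unchanged
db.runCommand({ compact: "mycoll" });   // defragments in place; db.repairDatabase()
                                        // rebuilds the files and shrinks them
```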
[18:09:35] <souza> Hello guys, i'm having a problem comparing dates with mongodb and C. this is my output (http://pastebin.com/59yuTimT). it is returning all services in my database, but it should return only the services that have a date less than "1340906127000" (date in milliseconds). the last record has a date (in milliseconds) of 1340906187000 and shouldn't be returned. does someone know how i can fix it?
[18:11:37] <dstorrs> souza: I can't make heads or tails of that output.
[18:11:47] <dstorrs> Can you post an example of two of the docs in your services collection?
[18:16:57] <Ahlee> Does this mean I didn't get this field indexed, or it didn't get inserted? from my log file: Wed Jun 27 23:04:40 [rsSync] ibp.logging Btree::insert: key too large to index, skipping ibp.logging.$message_1_background_ 1362 { : "SurVo - survey(): <snip>
[18:27:20] <halcyon918> kali: sorry for the late response about the replica set stuff… but does that mean that the primary will keep the write even though the other replicas didn't get it? I'm confused about the implications of a REPLICA_SAFE partial save
[18:27:28] <dstorrs> souza: perfect. reading it now.
[18:29:15] <dstorrs> souza: ok, so for each 'Services' object the 'vm' key has only one entry under it, yes?
[18:29:29] <dstorrs> or would it ever be an array?
[18:30:40] <souza> i have an array of vms, but i want to get all services with only the vms that have a date $lt the given date
[18:30:49] <dstorrs> if it's always exactly one, this will work: db.Service.find({ 'vm.last_date' : { $lt : THE_DATE_YOU_CARE_ABOUT } })
[18:32:24] <souza> dstorrs: Ok, but i'm doing this in C, and my query is something like that, yet it still returns dates $gt that value
[18:32:51] <souza> dstorrs: sorry for my poor english :(
[18:32:52] <dstorrs> if it gives you grief about 'last_date' sometimes being there and sometimes not, you can do this: db.Service.find({ $and : [ { 'vm.last_date' : { $exists : true } }, { 'vm.last_date' : { $lt : THE_DATE } } ] })
[18:33:21] <dstorrs> souza: no worries. my russian (?) is far worse.
[18:33:28] <multiHYP> football calling, seeyalater. :)
[18:34:53] <souza> dstorrs: my problem is not whether it exists or not, my problem is that my query is retrieving dates greater than that value, when i want only the ones less than it
[18:37:42] <souza> dstorrs: like in this paste http://pastebin.com/59yuTimT you can see the time in milliseconds in the first line, and the last record has a value greater than that one.
[18:39:47] <dstorrs> souza: please check that your query actually matches what I posted. Because what I saw in your code did not.
[18:40:59] <dstorrs> you were doing { 'vm' : { $lt : DATE }} when you needed {'vm.last_date' : { $lt : DATE}}
[18:47:53] <dstorrs> I am not familiar with the C driver, but I am willing to bet money that you have still not accurately translated what I told you to use.
[18:48:54] <dstorrs> because as far as I can see, your query is now this: { vm : { $lt : 'vm.last_date' : Date }} which is not even syntactically correct
[18:51:33] <souza> dstorrs: and what is the correct query?
[18:51:56] <dstorrs> souza: I've posted it several times now. Please scrollback and read.
[18:52:46] <souza> db.Service.find('vm.last_date' : { $lt : 1340908730000 }] }); This? it was returning error. :(
[18:53:09] <dstorrs> unsurprising. it's not syntactically correct
[18:54:14] <dstorrs> souza: I'm trying to be nice about this, but I'm on three hours sleep and I'm starting to feel like I'm talking to a wall, because I keep posting the exact query and you keep not using it.
[18:54:21] <Derick> you need to send a javascript object to find()
[18:54:33] <Derick> not some stuff separated by a :
[18:54:52] <Derick> each object starts with a { , then has properties, and then a }
[18:55:04] <dstorrs> One last time. Do this: db.Service.find({ 'vm.last_date' : { $lt : 1340908730000 } })
[18:55:05] <Derick> a property is a keyname, followed by a : followed by a value
[18:55:20] <dstorrs> do it in the shell. not in C. it if returns the right thing, then translate to C.
[18:55:34] <souza> dstorrs: sorry, i'm about done with this question.
[18:56:07] <souza> dstorrs: i'll try a little more, because this one doesn't return anything
[18:56:28] <Derick> because it's syntactically incorrect...
[18:56:33] <Derick> souza: did you read what I wrote?
[18:58:46] <dstorrs> thanks for helping Derick. maybe that was the concept I was not getting across to him.
[18:59:31] <Derick> dstorrs: i was thinking about just writing it in BNF though
[19:01:19] <dstorrs> Derick: I don't know if that would have been better or not. it seems that he had never really RTFM'd and had no mental model of Mongo queries. I suspect he was copy-pasting
[19:04:51] <souza_> dstorrs: thanks for the help, i don't get it yet, but i think i have a way forward now!
[19:05:17] <dstorrs> souza_: you're welcome and good luck.
[19:06:19] <dstorrs> souza_: if you haven't done so, I strongly recommend sitting down and reading absolutely everything under http://docs.mongodb.org/manual/#for-developers
[19:06:44] <dstorrs> it will take several hours. It will save you several days over the next few weeks.
[19:35:02] <FerchoDB> Hi, We're doing some tests with TTL collections. We are trying the example on blog.mongodb.org but it doesn't remove the document at 30 seconds, it takes much longer before removing it
[19:35:13] <FerchoDB> do you know why this can be happening?
[19:59:57] <FerchoDB> MongoDB 2.1.2 takes 44 seconds to remove 30-second TTL documents. Is a ~15-second delay acceptable?
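That delay is expected rather than a bug: TTL expiry is enforced by a background task that only wakes up roughly once a minute, so documents can outlive their TTL by up to a minute or so. A sketch of the setup FerchoDB is testing (names are placeholders):

```javascript
db.ttl_demo.ensureIndex({ createdAt: 1 }, { expireAfterSeconds: 30 });
db.ttl_demo.insert({ createdAt: new Date(), note: "should vanish within ~90s" });
db.ttl_demo.count();   // poll this; it drops to 0 on the next TTL sweep
```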
[20:00:52] <php10> so i'm trying to build an aggregated stats collection. takes every combination of the hit/sale metadata such as user, site, lander, geoip country, geoip region, http referrer, etc. and then has a bunch of counters raw hit, index hit, unique hit, leads, etc. i need to query any of these fields and then be able to break down by any of these fields. is storing all of these combinations the best
[20:00:53] <php10> way to do this? it seems that after even a day of traffic collection the collection becomes massive
[20:01:33] <spikie> I use Doctrine MongoDB ODM, and when I try to run a map-reduce with the inline option, mongo doesn't return a 'results' key as expected, but a 'result' <collection_name>.... Could someone help me?
[20:02:06] <php10> but it seems that in order to keep any of the fields associated in order to breakdown the data, the only way i can aggregate are on unique combinations
[20:02:07] <dnnsmanace> how do i combine a regular update with $push?
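On dnnsmanace's question: a single update() can carry several modifiers, e.g. $set and $inc alongside $push, as long as they touch different fields. A sketch with placeholder names:

```javascript
db.stats.update(
    { _id: "some-id" },                               // placeholder selector
    {
        $set:  { lastSeen: new Date() },
        $inc:  { hits: 1 },
        $push: { events: { at: new Date(), type: "hit" } }
    },
    true                                              // upsert
);
```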
[20:15:46] <FerchoDB> now MongoDB 2.1.2 takes about 90 seconds to remove 30-second TTL documents. is this because it's still unstable or because I'm missing something?
[20:18:17] <charnel> hi, when I try to connect to mongolab with the URI I am getting "unable to authenticate user mongodb", where "mongodb" is the protocol part of the URL. does anyone know how to set the URL?
[20:26:20] <dnnsmanace> hi i am getting a dup key error even though the key should be the unique id generated on creation of document
[22:07:46] <tystr> yeah, looks like snapshots is the way to go
[22:12:13] <tomlikestorock> Is it better to store fewer fat rows with a lot of data in a row field, or is it better to split that field up into individual rows and store many smaller rows in a collection?
[22:12:47] <tomlikestorock> by rows I mean documents, I guess
[22:26:41] <kenneth> where the string is hex-encoded bson data
[22:26:50] <kenneth> notice all the trailing null bytes?
[22:27:18] <kenneth> i'm encoding this using the C lib for bson as provided in the mongodb c driver
[22:29:28] <algernon> I assume it is so, because the bson has a length specifier in the beginning, and the php driver ignores any junk past that length
[23:06:43] <php10> question for php mongo users. trying to increment a field in an embedded document. driver is building the $inc as follows: ['$inc']['sums']['hits'] 1 but i'm receiving "Modifier $inc allowed for numbers only" ... note: i'm doing this on upsert. it works with ['$inc']['hits'] 1
[23:08:04] <kenneth> algernon: well, i'm more curious why the C lib is generating a bunch of extra junk and making my payload significantly bigger
[23:10:30] <php10> my upsert $inc array right from the driver: http://pastebin.com/8JnjbBtH
[23:12:32] <php10> oh i think i know, the $inc needs to be on each of the embedded fields
[23:22:00] <php10> nope... dot notation was the answer
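php10's eventual fix, shown in shell syntax for clarity (in PHP the array key is just the dotted string 'sums.hits'): target the embedded counter with dot notation rather than nesting a document under $inc.

```javascript
// Wrong: { $inc: { sums: { hits: 1 } } }   -> "Modifier $inc allowed for numbers only"
// Right:
db.stats.update({ _id: "some-id" },            // placeholder selector
                { $inc: { "sums.hits": 1 } },
                true);                         // upsert, as in php10's case
```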