#mongodb logs for Thursday the 19th of June, 2014

[01:04:02] <justinsd> How can I make my current box primary
[01:10:51] <joannac> justinsd: stepDown() on the other node?
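A minimal sketch of the stepDown route joannac suggests, run in the mongo shell on the member that is currently primary (the 60-second value here is arbitrary):

    // Step down and refuse to seek re-election for 60 seconds, giving
    // the desired box a chance to win the election.
    rs.stepDown(60)
    // Raising the desired member's priority via rs.conf()/rs.reconfig()
    // also biases elections toward it.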
[01:42:58] <raphael> hello all
[01:43:22] <raphael> I'm trying to setup a TTL on a collection and not having a lot of luck
[01:43:33] <raphael> mongodb version is 2.4.9
[01:43:40] <raphael> collection isn't capped
[01:43:57] <raphael> index is of type Date() and not compound
[01:44:11] <raphael> { "v" : 1, "key" : { "updated_at" : 1 }, "ns" : "cloudworkflow_production.temp_tasks", "name" : "updated_at_1", "expire_after_seconds" : 1800 }
[01:44:41] <raphael> and yet I see old documents that haven't been deleted a few hours later (TTL is 1/2 hour)
[01:45:28] <raphael> > db.temp_tasks.find({},{"updated_at":1}).limit(1)
[01:45:29] <raphael> { "_id" : ObjectId("53584b8bedaf89e7c200039d"), "updated_at" : ISODate("2014-04-23T23:23:55.268Z") }
[01:46:04] <raphael> I'm running out of ideas, perused the usual source of information but couldn't find anything that would explain what I'm seeing (or not seeing)
[02:14:19] <joannac> logs show the TTL monitor thread running?
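One thing that stands out in the index raphael pasted: the option is stored as "expire_after_seconds", but the TTL monitor only honours "expireAfterSeconds", so an index created with the snake_case name behaves like an ordinary index and never expires anything. A hedged sketch of recreating it, assuming that is indeed the problem (2.4 shell):

    // Drop the non-TTL index and recreate it with the camelCase option;
    // documents should then expire roughly 1800s after their updated_at.
    db.temp_tasks.dropIndex("updated_at_1")
    db.temp_tasks.ensureIndex({ updated_at: 1 }, { expireAfterSeconds: 1800 })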
[03:09:48] <umquant_> Any experienced time-series mongodb users have some wisdom to share with someone looking to move from mysql to mongo for real-time sensor data?
[05:56:45] <ns5> Hi, my mongodb can't start, could anyone help? This is the mongod log: http://ur1.ca/hk6tm
[06:05:35] <joannac> weeeeeird
[06:05:49] <joannac> can you try starting again and add -vv for more verbosity?
[06:06:38] <joannac> actually, more v's
[06:10:49] <ns5> joannac: http://ur1.ca/hk6vn
[06:12:03] <joannac> um, also with all the other startup parameters...
[06:15:23] <ns5> joannac: I don't know how, I just start the mongod via ubuntu start script
[06:18:27] <ranman> ns5: how much diskspace is left on your device?
[06:19:07] <ns5> ranman: 47G left
[06:19:38] <ns5> log of mongod -vvv: http://ur1.ca/hk6zc
[06:20:51] <ranman> also on the partition that the journal is stored on? that looks like it had an error trying to allocate a file :/ which would be an issue around space or file descriptors
[06:21:36] <joannac> looks like a dodgy database
[06:21:47] <joannac> 2014-06-19T14:16:51.436+0800 [IndexRebuilder] opening db: ▒Y
[06:22:26] <ranman> ignore me
[06:22:31] <ranman> I was looking at the wrong log file
[06:22:48] <joannac> ns5: what's that database?
[06:24:13] <ns5> joannac: user accounts, posts, user groups etc..
[06:24:54] <joannac> why is it called <funny character>Y?
[06:25:07] <joannac> also, backups?
[06:25:11] <joannac> replica set?
[06:26:27] <joannac> my hypothesis is that if you moved all the files for that database into a different folder, mongod would start
[06:30:13] <ns5> joannac: http://ur1.ca/hk71n move what file to where
[06:40:28] <joannac> those are all the files in that directory?
[06:40:34] <joannac> no hidden files?
[06:40:45] <joannac> ls -lah
[06:43:53] <ns5> joannac: no hidden files. I moved the db files to another folder and mongodb can start
[06:44:27] <ns5> joannac: so mongodb can't handle non-ascii characters for db names?
[06:44:53] <joannac> it can
[06:45:01] <joannac> still not sure what happened in your case
[06:46:16] <joannac> looks like you had an empty collection name
[06:46:17] <garmet> Hello! Anyone here tried mongodb on embedded? If so, any good/bad experiences?
[06:46:31] <joannac> garmet: um, not officially supported afaik
[06:46:42] <joannac> wait, maybe that was just raspberry pi
[06:47:31] <garmet> I've seen some guides for running mongo clustered on multiple pi's
[06:47:59] <joannac> garmet: from where?
[06:50:59] <garmet> joannac, some wordpress guy. Could not find it atm.
[06:52:10] <garmet> joannac, https://mongopi.wordpress.com/ this guy
[06:54:22] <aravchr> hi
[06:54:35] <aravchr> is there any way to store php objects in mongo db
[06:54:36] <aravchr> ??
[07:52:30] <Qalqi> db.users.findOne({'services.facebook.id':123456788})
[07:52:39] <Qalqi> can someone help me make this work?
[07:59:55] <rspijker> Qalqi: what type is that field?
[08:00:08] <rspijker> I’m guessing it’s a string. You’re not searching for a string...
[08:13:48] <arussel> in an aggregate, is there a way to cast an int or float to a string to be used in a concat ?
[08:22:45] <Qalqi> rspijker: string
[08:27:28] <arussel> using 2.6
[08:30:24] <Qalqi> rspijker:
[08:30:28] <Qalqi> u still there?
[08:30:34] <rspijker> yeah
[08:30:38] <rspijker> search for a string instead of a number
[08:30:50] <rspijker> "services.facebook.id": "123456788"
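Spelled out, rspijker's point is simply that the query value must match the stored type. Using the query from the start of the exchange:

    // The stored facebook id is a string, so quote the value; matching
    // against the number 123456788 returns nothing.
    db.users.findOne({ "services.facebook.id": "123456788" })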
[10:01:22] <greybrd> hi guys. the number of open client connections to my mongodb server stays constant at the max ever used. connections aren't actually being closed and are still active. any input on closing them after a db operation is done?
[10:04:07] <Qalqi> greybrd: that can be done.
[10:04:27] <greybrd> Qalqi: thanks a lot. how? any pointers?
[10:06:06] <greybrd> I'm actually using only one MongoClient object as a static variable and sharing it across the threads, unlike my previous approach, which instantiated a new MongoClient in every thread.
[10:07:05] <greybrd> even the single static MongoClient object holds a constant number of connections. once opened they're not getting closed.
[10:07:39] <greybrd> Qalqi: any way I can close them? when not used or idle?
[10:09:01] <Nodex> MongoClient operates a pool of connections
[10:11:43] <greybrd> Nodex: yeah. I came to know about that. is there a way I can avoid this pooling, or close a connection in the pool of my choosing?
[10:15:17] <Qalqi> yes
[10:22:10] <greybrd> Qalqi: sorry for my ignorance. can you tell me how or any Internet link which explains this?
[10:26:34] <fontanon> Hi everybody, I've a newbie question with indexes ... If I'm making queries like db.mycol.find({FieldA:'valueA'}).sort({FieldB:-1}) which would be the best index to create? I'm in doubt between creating an index for FieldB or creating a compound index for FieldA/FieldB
[10:31:30] <Nodex> which version?
[10:32:04] <greybrd> mongo : 2.6.1, java driver : 2.11.3
[10:32:06] <Nodex> 2.6 has better index intersection
[10:32:46] <Nodex> greybrd : I am not sure but it's probably not the best idea to go outside the pooling internals of Mongodb
[10:33:43] <greybrd> ohhh.. I see. so then I shouldn't be bothered about the active open connections?
[10:34:25] <Nodex> no
[10:35:23] <greybrd> Nodex: no meaning, I don't have to worry?
[10:41:33] <Nodex> no you shouldn't worry
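If greybrd wants to confirm the pool is merely holding idle connections rather than leaking, the server-side counters can be checked from the mongo shell:

    // current / available / totalCreated as the server sees them; a
    // "current" value that sits at the pool size is expected behaviour.
    db.serverStatus().connections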
[12:34:41] <arussel> is there a way to cast float to int in aggregate ?
[12:35:13] <arussel> I want to go from 3 / 2 => 1
[12:39:27] <arussel> I have some times in milliseconds and I'm trying to group values within the same minute
[12:43:31] <arussel> ideally without using subtract, divide, mod ....
[12:52:05] <apeeyush> Hi! I have a collection.. I run an aggregation query on it that may take minutes (large data).. Will I be able to insert new documents or perform other queries..?
[12:53:33] <kali> apeeyush: yeah, it does not lock
[12:55:49] <apeeyush> ok that's great!
[12:57:13] <apeeyush> Also, I will be mainly inserting and querying my data.. (No updates at all). Any benchmarks of using MongoDB vs SQL (maybe postgres) for indexed or non-indexed data...
[12:57:35] <apeeyush> My data is only partially structured...
[12:57:43] <spacepluk> hi, I'm trying to use mapreduce with mongoose/nodejs. Is it possible to use closures for map/reduce functions?
[13:07:04] <rspijker> arussel: there isn’t anything like round or floor in there
[13:09:44] <arussel> cast would be nice, cast to string for concat and cast to int as a 'cheap' floor
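For what it's worth, one workable option on 2.6 is the very $subtract/$mod arithmetic arussel hoped to avoid (a $floor operator only appeared in later releases). A sketch with assumed collection and field names:

    // Truncate a millisecond timestamp to its minute and group on the
    // truncated value ("events" and "ts" are placeholder names).
    db.events.aggregate([
      { $group: {
          _id: { $subtract: [ "$ts", { $mod: [ "$ts", 60000 ] } ] },
          count: { $sum: 1 }
      } }
    ])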
[14:17:10] <MANCHUCK> why would my primary take up 3 times the amount of disk space as the secondaries ?
[14:17:23] <MANCHUCK> and can I clean that up somehow ?
[14:30:47] <rspijker> MANCHUCK: could be all kinds of reasons
[14:31:02] <rspijker> MANCHUCK: the only way to actually reclaim disk space is to do a repairDatabase
[14:40:05] <MANCHUCK_> rspijker: the doc for repairDatabase says not to do that with a replica set
[14:40:40] <rspijker> MANCHUCK_: I think it will say not to do that on production…
[14:40:48] <rspijker> still, it’s the only real option
[14:41:10] <rspijker> the other option is to remove all your data and resync
[14:41:22] <rspijker> but that’s not really any better
[14:41:59] <MANCHUCK_> rspijker, thanks just testing to see if there are other ways
[14:42:13] <rspijker> there aren’t
[14:43:29] <rspijker> MANCHUCK_: http://docs.mongodb.org/manual/reference/command/repairDatabase/
[14:43:35] <rspijker> see the very last paragraph
[14:43:45] <rspijker> “However, if you trust that there is no corruption and you have enough free space, then repairDatabase is the appropriate and the only way to reclaim disk space.”
[14:44:17] <MANCHUCK_> yea i see that
[14:44:46] <MANCHUCK_> i was stuck on the next paragraph
[14:44:47] <MANCHUCK_> "If you are trying to repair a replica set member, and you have access to an intact copy of your data (e.g. a recent backup or an intact member of the replica set), you should restore from that intact copy, and not use repairDatabase."
[14:51:47] <kali> it means that repairDatabase can be used to try to salvage whatever can be salvaged if you're that desperate. but it won't break anything if your database is sound and you just want to reclaim space
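The two routes discussed above, side by side, run against the member whose space needs reclaiming:

    // Option 1: rewrite the data files compactly in place; needs enough
    // free disk space to hold a copy of the data set while it runs.
    db.repairDatabase()
    // Option 2: stop the member, empty its dbpath, restart it, and let
    // it take a fresh initial sync from the rest of the replica set.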
[14:53:42] <cored> hi all
[14:53:52] <cored> is there any documentation regarding scaling mongodb in amazon ec2
[14:53:54] <cored> ?
[14:55:01] <tscanausa> it would be the same as scaling any mongo instance…
[14:55:20] <cored> and is there documentation for scaling any mongo instance?
[14:56:15] <tscanausa> http://docs.mongodb.org/manual/sharding/
[14:56:19] <cored> thanks
[16:40:26] <fontanon> Hi everybody, I've a newbie question with indexes ... If I'm making queries like db.mycol.find({FieldA:'valueA'}).sort({FieldB:-1}) which would be the best index to create? I'm in doubt between creating an index for FieldB or creating a compound index for FieldA/FieldB
[16:44:57] <Nodex> which VERSION of mongodb
[16:51:56] <fontanon> Nodex, 2.4
[16:52:20] <Nodex> you will need a compound index then
[16:52:38] <Nodex> not even sure that will cut it though.
[16:53:03] <Nodex> it will probably be two indexes on 2.4 - one on A one on B
[16:58:11] <kali> fontanon: Nodex: naaaaaa a compound, { FieldA: 1, FieldB: -1}
[16:58:38] <kali> (the -1 is not critically important)
[16:59:44] <fontanon> Nodex, why did you ask me about the mongodb version? were there significant enhancements or changes related to indexes in mongodb 2.6?
[17:00:02] <fontanon> kali, I guess -1 is for making descending sorts more efficient, am I wrong?
[17:01:02] <kali> fontanon: there are significant enhancements, like the ability to combine (intersect) indexes
[17:01:25] <kali> fontanon: in 2.4 and before, mongodb could only use one index per query
[17:03:13] <kali> fontanon: the -1 will make only a marginal difference. with this compound index, mongodb will just perform a lookup in the index then scan it, and it will find the documents it needs to send back to you already in the right order
[17:03:21] <kali> fontanon: this is ideal, even with 2.6
[17:04:24] <kali> fontanon: 2.6 can handle less ideal cases in a less catastrophic way than 2.4, but if this query is important for your use case, just create the "right" compound index. you'll save a few polar bears
[17:08:37] <fontanon> kali, thanks
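Putting kali's answer in concrete form for the 2.4 case fontanon described:

    // One compound index serves both the equality match on FieldA and
    // the descending sort on FieldB with a single index scan.
    db.mycol.ensureIndex({ FieldA: 1, FieldB: -1 })
    db.mycol.find({ FieldA: "valueA" }).sort({ FieldB: -1 })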
[17:13:26] <Nodex> kali: *facepalm*
[17:13:43] <Nodex> I was looking at my answer and thinking "that's not right LOL"
[17:19:17] <kali> good. you're not mad then :)
[17:22:10] <Nodex> I wouldn't quite say that!
[17:56:15] <NaN> what is "/var/lib/mongo/journal/prealloc.0" for?
[17:56:53] <Nodex> pre-allocation of a file for the journal, so that in the FS its inodes sit next to the next one's
[17:58:16] <NaN> it seems to be the reason the mongod service won't run
[17:59:16] <Nodex> in what way?
[17:59:39] <NaN> prealloc.1 and .2 are created by the mongo user, but .0 is created by root
[18:00:05] <Nodex> did you change the user of mongod at any point?
[18:00:47] <Nodex> all of mine are owned by the mongodb user
[18:01:21] <NaN> nope
[18:01:51] <NaN> I just created /var/run/mongodb/mongod.pid with sudo because the log told me it wasn't present
[18:01:58] <kali> you probably ran mongod as root at some point... to check that it was running
[18:03:32] <NaN> how can I be sure it's not running? kill all the mongod processes and try to run the service again?
[18:03:45] <kali> yeah, "ps" ...
[18:04:31] <NaN> ps | grep mongo .... nothing
[18:05:19] <kali> i'm amazed at the noise the small colombian community in paris manages to make
[18:05:49] <Nodex> ps ax | grep mongodb
[18:05:49] <kali> and it was the same last saturday
[18:06:55] <NaN> Nodex: only the grep call >> 3748 pts/2 S+ 0:00 grep --color=auto mongodb
[18:07:01] <NaN> I mean, the ps call
[18:08:27] <kali> good luck with the air controller strike :/
[18:08:44] <Nodex> on MOnday?
[18:08:58] <Nodex> even in Toulouse airport?
[18:09:39] <kali> it's supposed to start on Tuesday
[18:09:44] <kali> so you should be fine
[18:09:47] <Nodex> ah, might get away with it then haha
[18:09:56] <Nodex> as long as it doesn't happen on 7th July also
[18:10:30] <kali> but strikes in Toulouse can be strong (in general, not sure about air control specifically)
[18:10:54] <NaN> is mongod.lock the only file I can remove in order to "reset" to a clean start for the mongod service, or should I remove other files too?
[18:11:54] <kali> (i hope what these guys fire are firecrackers)
[18:12:33] <kali> NaN: do you have data in this server or is it a fresh install ?
[18:13:27] <NaN> kali: yes I have data, it's a dev server
[18:14:49] <kali> NaN: in that case i would start with a chown where these wrongly attributed files are
[20:28:54] <AliRezaTaleghani> guys...
[20:29:15] <AliRezaTaleghani> I have a serious problem with my production sharding
[20:30:03] <AliRezaTaleghani> if you check this pastebin: http://pastie.org/9306396
[20:30:39] <AliRezaTaleghani> my sh.status() shows both of my shards!
[20:31:16] <AliRezaTaleghani> but when I check the sharded collection's own status, it doesn't show the archive shard's status
[20:31:35] <harttho> Anyone know what would cause a field to become NaN? The field starts at 0 and is only ever incremented with $inc
[20:37:41] <joshua> AliRezaTaleghani: That looks like all your chunks are on one shard. Is the balancer enabled?
[20:37:53] <AliRezaTaleghani> yep
[20:38:02] <AliRezaTaleghani> it's actively enabled
[20:38:16] <AliRezaTaleghani> to be honest, I came up with a strange plan
[20:38:20] <AliRezaTaleghani> let me explain...
[20:38:35] <AliRezaTaleghani> my master mongod had totally exceeded its disk space
[20:38:48] <AliRezaTaleghani> and I had some SATA disks available
[20:39:10] <AliRezaTaleghani> so I started a Tier2 shard - I mean a tag-based shard
[20:39:45] <AliRezaTaleghani> all was perfect except that the reserved space on the SSD disks with the primary Shard000 was not released to the OS
[20:40:06] <AliRezaTaleghani> so I stopped all subsystems
[20:40:09] <AliRezaTaleghani> I mean
[20:40:13] <AliRezaTaleghani> 3 configservers
[20:40:17] <AliRezaTaleghani> mongos
[20:40:25] <AliRezaTaleghani> and both shards
[20:40:40] <AliRezaTaleghani> then backed up almost everything
[20:40:54] <AliRezaTaleghani> then flushed shard000
[20:41:11] <AliRezaTaleghani> and used mongorestore to restore all its real data
[20:41:30] <AliRezaTaleghani> by this process I got back all the free space (~ 150GB)
[20:42:03] <AliRezaTaleghani> we restarted the shard and the balancer! there were some things that hinted me towards the config db
[20:42:14] <AliRezaTaleghani> and the changelog and chunks collections
[20:42:30] <AliRezaTaleghani> which still contained the old shard000 db
[20:42:49] <AliRezaTaleghani> so again I stopped the balancer and flushed both collections!
[20:42:53] <AliRezaTaleghani> :-)
[20:43:01] <AliRezaTaleghani> now the balancer is active
[20:43:25] <AliRezaTaleghani> and currently it's moving old docs to the archive-tagged shard001
[20:43:26] <joshua> Hmm I noticed you have tags, that's not something I am familiar with
[20:43:47] <joshua> Maybe the tags are interfering with what data gets put where
[20:43:49] <AliRezaTaleghani> but as you see... the mongos doesn't show both shards
[20:45:24] <AliRezaTaleghani> I'm not sure what happens if I wait for the balancer to finish its process! maybe after a scanning round everything gets done nicely, but I'm not sure :-/
[20:45:29] <joshua> It lists them at the top in shards, so the cluster should be aware of both of them but I think you might have something else going on.
[20:45:53] <joshua> at the end of the sh.status output it shows the ranges for the tags
[20:47:14] <AliRezaTaleghani> yep, it shows both tags and also the shards!
[20:47:25] <joshua> If you want them balanced with even amounts of data you would need a different shard key. But maybe it wasn't designed to split it up for performance. It looks like it's putting older data on one shard
[20:47:43] <AliRezaTaleghani> but the collection's status does not aggregate both shards
[20:48:20] <AliRezaTaleghani> joshua: let me ask it like this
[20:48:24] <AliRezaTaleghani> as an example
[20:48:43] <joshua> I've never done tags, so I am not a lot of help. :)
[20:48:44] <AliRezaTaleghani> if I have two separate mongods with alike dbs and collections
[20:49:00] <AliRezaTaleghani> and I start sharding between both!
[20:49:39] <joshua> What is the oldest data in your database? The oldest value for dateCreated
[20:50:04] <joshua> The tags say the data will be split on ISODate("2014-06-01T00:00:00Z")
[20:50:05] <AliRezaTaleghani> old are ISODate
[20:50:29] <AliRezaTaleghani> yep, via the tag-based sharding I'm just moving old docs to a slower shard
[20:50:39] <joshua> If I am understanding it correctly, anything older than that date goes on the one with the archive tag, shard0001
[20:50:53] <AliRezaTaleghani> 1+
[20:51:02] <AliRezaTaleghani> exactly
[20:51:52] <harttho> Anyone know what would cause a field to become NaN? The field starts at 0 and is only ever incremented with $inc
[20:52:58] <joshua> AliRezaTaleghani: Maybe try doing this with -1 and 1 and see your date ranges: db.bi-reporting.find({},{"dateCreated":-1}).limit(1)
[20:53:23] <joshua> Eh or find by the date in the tag and do a count
[20:53:53] <AliRezaTaleghani> let me check it
[20:54:18] <AliRezaTaleghani> but I'm sure about each of them separately ;-)
[20:55:04] <joshua> something like this (my syntax might be off) db.bi-reporting.find({"dateCreated" : {$gt: ISODate("2014-06-01T00:00:00Z")} })
[20:56:30] <joshua> I'm heading out, but that might help you figure out what your data looks like which those tags are using to split it.
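A shell-safe version of the checks joshua sketches (the hyphen in the collection name means the db.bi-reporting shorthand won't parse, so getCollection is used; the date is the tag boundary joshua quotes above):

    // Oldest document by dateCreated:
    db.getCollection("bi-reporting").find({}, { dateCreated: 1 }).sort({ dateCreated: 1 }).limit(1)
    // How many documents fall on each side of the archive-tag boundary:
    db.getCollection("bi-reporting").count({ dateCreated: { $lt: ISODate("2014-06-01T00:00:00Z") } })
    db.getCollection("bi-reporting").count({ dateCreated: { $gte: ISODate("2014-06-01T00:00:00Z") } })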
[21:19:35] <mst1228> hi, i'm using Mongo with Node and have a question about doing queries based on the return of another query
[21:20:08] <mst1228> i definitely understand that there are no joins in Mongo, and if it was up to me, my db would be organized differently
[21:20:29] <mst1228> for various reasons, i have these two separate collections, both storing user info
[21:20:54] <mst1228> one is 'user' collection, this is the detailed info for a user (name, username, email, etc)
[21:21:27] <mst1228> another collection, 'peoplemeta', has meta data associated to a user. things like tags and previous employers
[21:22:07] <mst1228> I want to get all the documents in the 'people' collection, then add all of those users' detail info to the return
[21:22:29] <mst1228> i don't have an _id ref (again, if it was up to me...)
[21:22:52] <mst1228> but there is one unique field in 'people' collection that will match a field in the 'user' collection
[21:23:41] <mst1228> sorry for the setup, the question is, do i just have to loop through the first query return and do an individual query for each document in it?
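One way to avoid a query per document, shown in the mongo shell: collect the matching key from the first result set and fetch all the corresponding user documents with a single $in. mst1228 calls the collections 'peoplemeta'/'people' and 'user'; the shared field isn't named, so "handle" below is a placeholder:

    // Pull the unique key out of every peoplemeta document...
    var handles = db.peoplemeta.find().toArray().map(function (p) { return p.handle; });
    // ...then fetch the matching user details in one query and join in
    // application code.
    var users = db.user.find({ handle: { $in: handles } }).toArray();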
[21:55:38] <tinuviels> Hi there, fast, easy question, I'm Reading this: http://docs.mongodb.org/manual/tutorial/query-documents/
[21:56:04] <tinuviels> there is this query: db.inventory.find( { 'producer.company': 'ABC123' } )
[21:56:21] <tinuviels> I want to do something like: db.inventory.find( { 'producer.company': '*' } )
[21:56:23] <harttho> Anyone know what would cause a field to become NaN? The field starts at 0 and is only ever incremented with $inc
[21:56:55] <tinuviels> so as to simply return all documents that contain producer.company
[21:57:16] <tinuviels> any ideas?
[21:59:07] <joannac> harttho: overflow?
[21:59:26] <harttho> Does mongo default to NaN on int overflow?
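Not an answer to the root cause, but NaN does compare equal to itself in MongoDB queries, so the damaged documents can at least be located (collection and field names below are hypothetical). Note that once a field is NaN, further $inc operations leave it NaN:

    // Find counters that have gone bad.
    db.counters.find({ total: NaN })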
[21:59:27] <joannac> tinuviels: what does that mean? if the field exists?
[22:00:57] <tinuviels> joannac: yes, if it exists, or even better: if it contains at least one element
[22:03:06] <joannac> tinuviels: what does that mean? should it be an array? a subdocument?
[22:04:58] <tinuviels> let's follow this example: 'producer.company': 'ABC123'. I want to find all documents that contain something in "company". All documents will have "producer" but not all will have "company".
[22:05:23] <tinuviels> ^joannac
[22:06:25] <joannac> tinuviels: I'm trying to make you figure out what you want
[22:06:33] <joannac> does {producer: {company: 1}} count?
[22:06:43] <joannac> what about {producer: {company: ""}}
[22:06:54] <joannac> what about {producer: {company: null}}
[22:08:39] <tinuviels> joannac: This is not returning document: {producer: {company: "ABC123"}}
[22:09:20] <tinuviels> same for rest
[22:09:28] <joannac> tinuviels: I gave you 3 examples above. should all of them be returned for your query or not?
[22:10:38] <tinuviels> joannac - got it now, sorry. Ideally it would count only the first one, i.e. when it's not empty
[22:11:43] <tinuviels> however I'm almost 100% sure that in this DB, if a document contains "company" then it also contains something inside
[22:12:00] <tinuviels> so there shouldn't be situations with empty or null
[22:14:51] <joannac> db.irc.find({"producer.company": {$nin: [null, ""]}})
[22:14:55] <joannac> that's how I would do it
[22:15:56] <tinuviels> I've just found this: db.irc.find({"producer.company": {$exists: true}})
[22:16:20] <tinuviels> I will check your version
[22:16:27] <tinuviels> may be smarter
[22:16:32] <tinuviels> thanks Joannac
[22:31:27] <joannac> $exists true includes empty string
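The two queries from this exchange, with the difference joannac highlights spelled out:

    // Matches any document where the field is present, even if its
    // value is "" or null.
    db.irc.find({ "producer.company": { $exists: true } })
    // Excludes missing fields, nulls, and empty strings, which is
    // closer to "contains something".
    db.irc.find({ "producer.company": { $nin: [ null, "" ] } })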
[23:01:13] <notaninja> is it possible to change field values of a document reference after it has been populated?